I am working on generating 3D models from 2D input images. For the dataset I have .jpeg files for the 2D images and .OFF files for the corresponding 3D models. The problem I am facing is how to read these files, and what the right method is to feed them into my network.
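A minimal sketch of one way to read both formats in Python, assuming PIL/NumPy for the JPEGs and a hand-rolled parser for well-formed ASCII OFF files (the file names and the target image size are placeholders, not from the original question):

import numpy as np
from PIL import Image

def load_image(path, size=(128, 128)):
    # Read the JPEG, resize, and normalize to [0, 1]
    img = Image.open(path).convert('RGB').resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def load_off(path):
    # Parse an ASCII OFF file into vertex and face lists
    with open(path) as f:
        tokens = f.read().split()
    assert tokens[0] == 'OFF'
    n_verts, n_faces = int(tokens[1]), int(tokens[2])
    idx = 4  # skip 'OFF', the vertex/face counts, and the edge count
    verts = np.array(tokens[idx:idx + 3 * n_verts], dtype=np.float32).reshape(n_verts, 3)
    idx += 3 * n_verts
    faces = []
    for _ in range(n_faces):
        k = int(tokens[idx])  # number of vertices in this face
        faces.append([int(t) for t in tokens[idx + 1:idx + 1 + k]])
        idx += k + 1
    return verts, faces

x = load_image('chair_001.jpeg')          # placeholder file names
verts, faces = load_off('chair_001.off')

From there the mesh is usually converted to a fixed-size representation (e.g. a voxel grid or a sampled point cloud) before being fed to the network.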
In my dataset folder I have images in JPG format and segmentation masks in JSON format. I want to load this dataset to train a basic U-Net architecture. In most of the code I have seen, U-Net is trained on mask images stored as PNG or JPG, whereas in my case each image has its annotation in a JSON file. What code can I use to load these JSON annotations as masks for training?
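A minimal sketch, assuming the JSON files hold polygon annotations (LabelMe-style, with a "shapes" list of labelled point lists; if your schema differs, only the parsing part changes). It rasterizes each polygon into an integer mask array that a standard U-Net data loader can consume:

import json
import numpy as np
from PIL import Image, ImageDraw

def json_to_mask(json_path, height, width, label_map):
    # label_map maps class names to integer ids, e.g. {'road': 1, 'car': 2}
    mask = Image.new('L', (width, height), 0)
    draw = ImageDraw.Draw(mask)
    with open(json_path) as f:
        ann = json.load(f)
    for shape in ann['shapes']:          # assumed LabelMe-style schema
        points = [tuple(p) for p in shape['points']]
        class_id = label_map[shape['label']]
        draw.polygon(points, outline=class_id, fill=class_id)
    return np.asarray(mask, dtype=np.int64)

image = np.asarray(Image.open('frame_0001.jpg'))  # placeholder file names
mask = json_to_mask('frame_0001.json', image.shape[0], image.shape[1], {'object': 1})

The resulting (image, mask) pairs can then be wrapped in whatever Dataset or generator class your U-Net training code already expects.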
I am writing a Python API (WMS) to fetch gridded weather data (rainfall) and convert it into PNG images. I am using GDAL for this, and the workflow involves the steps below:
Fetching data from the database using the BBOX from the WMS request
Rasterizing the data using gdal.Rasterize and writing it to a TIFF file
Applying a style to the TIFF using gdal.DEMProcessing, converting it to PNG, and returning the PNG image as the API output
Consuming this WMS API from OpenLayers to overlay the data on a map
Problem:
The problem I am facing is that I cannot get the correct image size for the generated images, so the data is displayed incorrectly on the map. I need help figuring out the correct set of options to use in Rasterize or DEMProcessing to get the correct output. Sample code:
from osgeo import gdal, gdalconst

# Rasterize the fetched data into a GeoTIFF with a fixed pixel size
options = gdal.RasterizeOptions(
    format='GTiff', attribute='value', noData=NoData_value,
    width=w, height=h,
    outputType=gdalconst.GDT_Float32,
    outputBounds=[x_min, y_min, x_max, y_max],
    outputSRS=crs)
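Not a definitive fix, but a sketch of how the rest of the pipeline could pin the output size, assuming w and h come straight from the WMS GetMap WIDTH/HEIGHT parameters: pass the same width/height at every step, and do the final PNG conversion with gdal.Translate so the styled raster is not resampled along the way. The variable names (source_ds, tif_path, styled_path, png_path, color_table) are placeholders:

# 1. Rasterize at exactly the requested WMS pixel dimensions
gdal.Rasterize(tif_path, source_ds, options=options)

# 2. Apply the colour style; keep GTiff here so no implicit resampling happens
gdal.DEMProcessing(styled_path, tif_path, 'color-relief',
                   colorFilename=color_table, format='GTiff')

# 3. Convert to PNG, again forcing the WMS width/height
gdal.Translate(png_path, styled_path, format='PNG', width=w, height=h)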
I created a Blender project, exported it as JSON+BIN files, and display it through the blend4web web player. Everything works fine until I need to change the JSON file programmatically to add or remove a 3D object (e.g. a cube or box). I want to re-render the already displayed 3D model with an object added or removed. However, since exporting a project to blend4web also generates a .bin file, if I change only the .json the model is not displayed as expected. In this scenario, the only way to change the model seems to be modifying the .blend file and exporting it again from Blender. But I have not found a way to add a new 3D object to the Blender project programmatically starting from the .json file. Additionally, the .json I am updating uses data retrieved from a database; that data defines how and where the new 3D object should be placed in the scene, which prevents me from using Blender interactively to create the modified model.
Given this, I need help identifying:
What is the best way to change the scene programmatically and show it in blend4web, taking a .json file as the input for the model?
Is there a Python script that takes a blend4web .json file as input and regenerates the .bin file without the Blender project, so that my 3D model displays correctly in the blend4web web player for JSON files?
Or is there some (easy) way to modify a Blender project from data in JSON format, then compile and generate the files to be shown by the blend4web web player (for JSON), all programmatically? (A sketch of this last option is below.)
Thanks in advance.
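One possible sketch of that last option, not a confirmed blend4web workflow: drive Blender itself in background mode with a small bpy script that reads the database-derived JSON, adds the objects, and re-runs the blend4web exporter so that matching .json and .bin files are regenerated together. The exporter operator id (export_scene.b4w_json) and the input JSON layout (a list of objects with "type" and "position") are assumptions to verify against your blend4web install:

# run with: blender --background scene.blend --python add_objects.py
import json
import bpy

with open('/path/to/db_objects.json') as f:    # data pulled from the database
    objects = json.load(f)

for obj in objects:
    if obj['type'] == 'cube':                  # assumed schema: type + position
        bpy.ops.mesh.primitive_cube_add(location=tuple(obj['position']))

# Re-export JSON + BIN together; the operator id is the assumed blend4web exporter
bpy.ops.export_scene.b4w_json(filepath='/path/to/exported/scene.json')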
First of all: I'm a skilled developer but a total noob with 3D solid files/drawings.
I started playing with WebGL and three.js. My task is to bring a solid 3D file (e.g. STEP/IGES) into a web page (a sort of CAD viewer).
I started from this example:
http://www.johannes-raida.de/jnetcad/RadialEngine.htm
I want to obtain something like the above link, with a navigation tree and hide/show layers functionality.
The above example uses several JSON files, one per layer.
What I want to obtain is a three.js JSON file for each layer, to get the hide/show functionality.
Now, I have a solid file (STEP format: .STP). That file contains layers, and I want to obtain a three.js JSON file for each of them.
Questions are:
How do I export to three.js JSON using free software? I read that the best route could be: STEP > Wavefront OBJ [using FreeCAD?] > three.js JSON [using Blender?]
Is the COLLADA format better than OBJ?
Do I have to manually export each single layer to JSON?
Is there a utility out there to generate all the layers (as separate three.js JSON files) from a 3D file?
So, I did it by myself.
Three.js is a great library but it requires some 3d skills.
Here are my answers:
How to export to three.js JSON using free software?
Well, my suggestion is to convert your solid file to DAE; use whatever tool you prefer for that.
Then open the DAE in Blender and use the three.js exporter script for Blender:
https://github.com/mrdoob/three.js/tree/master/utils/exporters/blender
Do I have to manually export each single layer to JSON?
No, the 3D file has all the information about the layers, so you can use a single file.
Is there a utility out there to generate all the layers?
I haven't found anything good out there.
If you want separate layers, you can split the JSON file created with Blender (a sketch of how to do that is below).
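A rough sketch of that splitting step, assuming the export produced the three.js Object/Scene JSON format (a top-level "object" with a "children" array plus shared "geometries" and "materials" lists keyed by uuid); if your export looks different, adjust the keys accordingly:

import json

with open('scene.json') as f:
    scene = json.load(f)

# Write one file per top-level child, keeping only the geometry/material it references
for child in scene['object'].get('children', []):
    part = {
        'metadata': scene.get('metadata', {}),
        'geometries': [g for g in scene.get('geometries', [])
                       if g['uuid'] == child.get('geometry')],
        'materials': [m for m in scene.get('materials', [])
                      if m['uuid'] == child.get('material')],
        'object': child,
    }
    name = child.get('name', child['uuid'])
    with open('layer_%s.json' % name, 'w') as out:
        json.dump(part, out)

Each output file can then be loaded on its own (e.g. with THREE.ObjectLoader) and toggled to implement the hide/show behaviour.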
I want to render Autodesk 3D graphics files (.dwg and .dwf) using three.js, but three.js requires the 3D data to be in JSON format. So I need to convert these files into a three.js-readable JSON format. I tried searching the internet but couldn't find any solution. Can anyone suggest a good converter for these files?
Thanks in advance.
In fact, Autodesk already has a converter and WebGL viewer. Go to http://developer.autodesk.com and get a key for the View & Data API. There is a server-side REST API that allows you to upload a CAD file and convert it into a JSON stream; you can hook into it and read the output. Or, even easier, just use the JavaScript client-side API to embed the viewer in your website/app.
Update
The API was renamed to Model Derivative + Viewer: the first translates the source file (e.g. DWG, RVT, and many others) into a web-friendly format that can be viewed in the second, which is based on Three.js (and can be customized).
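As a rough orientation only (the endpoints and auth flow change over time, so treat the URLs, scopes, and field names below as assumptions to check against the current Autodesk documentation), the server-side conversion boils down to three HTTP calls: get a token, upload the file to an OSS bucket, and post a translation job. CLIENT_ID, CLIENT_SECRET, the bucket name, and the file name are placeholders:

import base64
import requests

BASE = 'https://developer.api.autodesk.com'

# 1. Two-legged OAuth token (client id/secret from the developer portal)
token = requests.post(BASE + '/authentication/v1/authenticate', data={
    'client_id': CLIENT_ID, 'client_secret': CLIENT_SECRET,
    'grant_type': 'client_credentials', 'scope': 'data:read data:write bucket:create',
}).json()['access_token']
headers = {'Authorization': 'Bearer ' + token}

# 2. Upload the DWG to an existing OSS bucket
with open('drawing.dwg', 'rb') as f:
    obj = requests.put(BASE + '/oss/v2/buckets/my-bucket/objects/drawing.dwg',
                       headers=headers, data=f).json()

# 3. Ask Model Derivative to translate it into the viewer (SVF) format
urn = base64.urlsafe_b64encode(obj['objectId'].encode()).decode().rstrip('=')
requests.post(BASE + '/modelderivative/v2/designdata/job', headers=headers, json={
    'input': {'urn': urn},
    'output': {'formats': [{'type': 'svf', 'views': ['2d', '3d']}]},
})

Once the job finishes, the translated model is displayed with the client-side Viewer rather than loaded into three.js directly.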