I'm new to NeRF. I'm trying to do view synthesis using a 2D face image dataset like FFHQ.
I extracted the camera pose below from my trained model to get the UV position map.
3.815960444873149338e-01 2.011289213814117585e-02 -2.146695841125471627e-01 1.331593756459746203e+02
4.556725709716180628e-02 4.190045369798199304e-01 1.202577357700833210e-01 -1.186529566109642815e+02
2.107396114968792533e-01 -1.270187761779554281e-01 3.627094520218327456e-01 1.925994034523564835e+01
Now I'm wondering: is this a camera-to-world (c2w) matrix or a world-to-camera (w2c) matrix?
I know c2w camera parameters are needed to train a NeRF model. (I also know there are several frameworks, but I want to try step by step.)
PRN official GitHub: https://github.com/YadiraF/PRNet
I assumed these were c2w parameters and tried to train on several images with different camera poses,
but it doesn't work for me.
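One way to check is to invert the pose and try training with the inverted matrices: for a homogeneous pose matrix, the inverse of a w2c matrix is the c2w matrix and vice versa. A minimal numpy sketch (the values are the ones above, truncated for readability):

```python
import numpy as np

# 3x4 pose [R | t] from the question (values truncated for readability)
P = np.array([
    [ 0.381596,  0.020113, -0.214670,  133.159376],
    [ 0.045567,  0.419005,  0.120258, -118.652957],
    [ 0.210740, -0.127019,  0.362709,   19.259940],
])

# Promote to a 4x4 homogeneous matrix and invert: if P is w2c,
# the inverse is c2w (and vice versa).
P4 = np.vstack([P, [0.0, 0.0, 0.0, 1.0]])
P4_inv = np.linalg.inv(P4)

# For a pure rigid pose the inverse would simply be [R^T | -R^T t],
# but that shortcut is only valid when R is a pure rotation.
# Here R @ R.T is far from the identity, so this matrix also bakes
# in a scale factor (likely from the UV-position-map fitting).
R = P[:, :3]
print(np.round(R @ R.T, 3))
```

If the rotation block is not orthonormal like this, the pose may need to be normalized (scale divided out) before it can be treated as a standard c2w matrix for NeRF training.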
My environment is
OS: Ubuntu
GPU: NVIDIA RTX A5000
I trained my custom YOLO dataset; now my requirement is to feed an image to the model and get back JSON with the detected classes, scores, and bounding boxes.
When I run detect.py, it looks like there is no option to configure this script to my needs, and I can only get back an image with the bounding box and score drawn on it.
How did you solve this problem?
Did you take parts of detect.py and modify them to your needs, or is there perhaps an existing solution?
I have found the answer.
Once you have your model, you need this script to run it and send the answer to a calling program.
I just had some problems with the paths, but figured it out in the end.
If you think it can be better, please let me know:
import os
import torch

# Important: the path here is resolved from the root project folder,
# not from the folder where this script is located
model = torch.hub.load(os.path.join(os.getcwd(), 'yolov5'), 'custom',
                       path='./yolov5/runs/train/lemon3/weights/best.pt',
                       source='local', force_reload=True)
results = model("./dataset/test/images/fridge2.jpg")
# cord_thres holds the normalized box coordinates (x1, y1, x2, y2);
# the 5th column is the confidence score
labels, cord_thres = results.xyxyn[0][:, -1].numpy(), results.xyxyn[0][:, :-1].numpy()
print(labels, cord_thres)
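To get the JSON the question asks for, the two arrays can be zipped into a list of dicts and serialized. A small sketch under stated assumptions: `class_names` stands in for the model's class-index-to-name mapping, and the `labels` / `cord_thres` values here are dummies so the sketch runs without a model.

```python
import json
import numpy as np

# Hypothetical class mapping and dummy detections, shaped like the
# xyxyn output above: labels is (N,), cord_thres is (N, 5) with
# normalized x1, y1, x2, y2 followed by the confidence score.
class_names = {0: "lemon"}
labels = np.array([0.0])
cord_thres = np.array([[0.1, 0.2, 0.5, 0.6, 0.91]])

detections = []
for cls, (x1, y1, x2, y2, conf) in zip(labels, cord_thres):
    detections.append({
        "class": class_names.get(int(cls), str(int(cls))),
        "score": round(float(conf), 4),
        "bbox": [float(x1), float(y1), float(x2), float(y2)],
    })

payload = json.dumps({"detections": detections})
print(payload)
```

The `float()` / `int()` casts matter: numpy scalars are not JSON-serializable, so each value has to be converted to a plain Python type before `json.dumps`.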
With Model Coordination we experience a slowdown in Navisworks when accessing properties.
Our app searches for properties and creates search sets automatically to save time. We use the Navisworks API to do so with:
ModelItemCollection searchResults = s.FindAll(Autodesk.Navisworks.Api.Application.ActiveDocument, true);
We redefine "s" and "searchResults" for each needed search set and save the search as a SavedItem (SearchSet).
Because we create a lot of SearchSets this way, the slowdown is more noticeable.
This action in a model from BIM 360 Glue (Classic) takes up to 10 seconds, whereas the same model in Model Coordination (next gen) takes more than 30 minutes.
The slowdown is visible regardless of our app: it also occurs when clicking on properties in the selection tree or in SearchSets.
1st question: Missing room models
I used the Model Derivative API from Forge with the generateMasterViews param to extract room nodes from a cloud Revit model (BIM 360 Team). It works perfectly fine for some models, but I have missing-room issues with others.
Successfully translated model
Some of the translated model's room nodes were missing, which is weird because their height and phase are the same and the rooms were shown in the 2D view.
2D views have no problem at all
3D room nodes on the same floor were missing
So my question is: are there some limitations on extracting room geometry and converting it to .svf files, e.g. publish view settings / view range / crop views / crop region? Or is something missing in my request? I tried to align all the parameters and remove some components (e.g. furniture), but the result is still the same. If you have noticed this issue before, please help; your help would be appreciated.
2nd question: Room mesh and materials
I've written a function to highlight the selected elements, and it works with almost every node except the room elements.
const highLightElement = (dbId, hexColor) => {
  const myMaterial = createMaterial(hexColor)
  const instanceTree = viewerApp.model.getInstanceTree()
  const fragList = viewerApp.model.getFragmentList()
  instanceTree.enumNodeFragments(dbId, function (fragId) {
    fragList.setMaterial(fragId, myMaterial)
    const fragProxy = viewerApp.impl.getFragmentProxy(viewerApp.model, fragId)
    fragProxy.scale = new window.THREE.Vector3(scaleRatio, scaleRatio, scaleRatio)
    fragProxy.updateAnimTransform()
  }, true)
  viewerApp.impl.invalidate(true)
}
The material of the room stays a gloomy white, but if I use the section tool on it, the section color does change to the color I chose (0xAB00EE, magenta). Is the room mesh different from the others, or does it need some special procedure?
Successful coloring nodes
Room nodes coloring
Z-section of colored room nodes
Regarding issue #1, it's possible that there are certain limitations preventing the Forge Model Derivative service from extracting room information but that would be more on the Revit side. If you wouldn't mind sharing the Revit model with us (confidentially) via forge (dot) help (at) autodesk (dot) com, we'd be happy to investigate it with the engineering team. In that case, please indicate which rooms are missing, too.
Regarding issue #2, from the Forge Viewer standpoint, rooms are just another geometry in the scene, with the only difference that this geometry is usually transparent and hidden by default. As far as I know, the visibility flag should not interfere with something like setting the material of a fragment but double-check if that really isn't the root cause.
I'm using Autodesk Forge via the official Node.js SDK. All dimensions in my Revit model are set in meters, but for some reason, when I retrieve the model from Forge, I get everything in English feet (beforehand, I convert my .rvt file to .nwc). While I can convert feet to meters on my own, this is really inconvenient. Is there a straightforward way to get everything in meters?
Unfortunately, this might not be possible. In my experience, it depends on the internal units of the source model. For example, the internal length unit of a Revit model is the English foot, so the units of the SVF, the translated result format of Forge, will still be feet. The Forge model loader won't change this while loading the model, even though the default unit of the Forge Viewer is the meter. We apologize for any inconvenience caused. However, there is a helper function to convert units in the Forge Viewer, like this:
var length = 1; // 1 foot
var lengthConverted = Autodesk.Viewing.Private.convertUnits(viewer.model.getUnitString(), 'm', 1, length);
Hope this helps.
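If you would rather convert outside the Viewer (for example, in a backend that reads the extracted properties), the conversion is fixed: one international foot is exactly 0.3048 m. A trivial sketch; the function name is mine:

```python
FEET_TO_METERS = 0.3048  # exact, by definition of the international foot


def feet_to_meters(value_ft: float) -> float:
    """Convert a length from feet to meters."""
    return value_ft * FEET_TO_METERS


print(round(feet_to_meters(1.0), 4))
```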
Briefly:
How do I parametrize a .prj WKT file so that I can perform a 7-parameter transformation (wiki)? I know how the false_easting and false_northing params work, but how can I adjust the scale? I do not mean scale_factor.
Here's the problem description:
I have a transportation network (vector layer) saved in a non-GIS environment (transport modeling software). The network consists of nodes (points) and polylines (road links). It was done mostly over random backgrounds, without regard to any projection, coordinates, etc.
I need to set an appropriate projection for the network.
I have access to .prj files (if I'm in, say, the WGS84 projection, I can switch to any other projection).
So this is what I'm trying:
I try the 7-parameter Helmert transformation (http://proj.maptools.org/gen_parms.html). I use the towgs84 transformation as a WKT param in the .prj file, where I assume the rotation matrix is zero (can I do so?) and calculate only delta_x, delta_y, and the scale param.
However, it doesn't work. This is my .prj; the params in TOWGS84 do not affect the transformation:
PROJCS["UTM 17 (WGS84) in northern hemisphere.",
GEOGCS["WGS 84",
DATUM["WGS_1984",
SPHEROID["WGS 84",6378137,298.257223563],
TOWGS84[0,0,0,0,0,0,100000000000000000000000]],
PRIMEM["Greenwich",0],
UNIT["DMSH",0.0174532925199433],
AXIS["Lat",NORTH],
AXIS["Long",EAST],
PROJECTION["Transverse_Mercator"],
PARAMETER["latitude_of_origin",0],
PARAMETER["central_meridian",0],
PARAMETER["scale_factor",1],
PARAMETER["false_easting",0],
PARAMETER["false_northing",0]]
So I tried to use the false_northing and false_easting params, and those work well and transform my network properly, BUT:
they will not change the scale of my network, only its position. So how can I rescale my network using a .prj file?
Thanks for any hints.
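For reference, the 7-parameter Helmert transformation the question refers to combines a translation, three small rotations, and a uniform scale change, which is exactly what the seven TOWGS84 values encode (dx, dy, dz in meters, rx, ry, rz in arc-seconds, scale in parts per million). A hedged numpy sketch of the position-vector convention; the function name and the sample values are mine:

```python
import numpy as np


def helmert7(xyz, dx, dy, dz, rx, ry, rz, s_ppm):
    """Apply a 7-parameter Helmert transform (position-vector convention).

    xyz        : (N, 3) array of Cartesian coordinates
    dx, dy, dz : translations (same units as xyz)
    rx, ry, rz : small rotations in arc-seconds
    s_ppm      : scale change in parts per million
    """
    arcsec = np.pi / (180.0 * 3600.0)  # arc-seconds -> radians
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    # Small-angle rotation matrix (position-vector sign convention)
    R = np.array([
        [1.0, -rz,  ry],
        [ rz, 1.0, -rx],
        [-ry,  rx, 1.0],
    ])
    scale = 1.0 + s_ppm * 1e-6
    return scale * (np.asarray(xyz) @ R.T) + np.array([dx, dy, dz])


# With zero rotations, only translation + uniform scale remain, which
# is what the question's TOWGS84 line is trying to express. An
# s_ppm of 1e6 doubles all coordinates before translating.
pts = np.array([[100.0, 200.0, 0.0]])
print(helmert7(pts, dx=10.0, dy=-5.0, dz=0.0, rx=0, ry=0, rz=0, s_ppm=1e6))
```

Note that this operates on geocentric Cartesian coordinates; TOWGS84 in a .prj is applied only as part of a datum shift, which matches the self-answer below.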
Problem solved: both 'scale_factor' and UNIT['Meter', scale_factor] work only if the datum changes.
Actually, comments on the same problem at gis.stackexchange.com brought me to the solution.
Anyway: .prj files, geographic coordinate systems, proj4js, EPSG, etc. are very weakly documented: no API, no tutorials, no examples, no references.
i.e.
1) no straightforward description of what the EPSG database codes are, and which should be chosen;
2) what +proj parameters I should choose to define a projection;
3) how to create a .prj and what the parameters of specific .prj file elements are.
An awful programming area!