Where is the cut plane for the Forge Levels extension?

For the Forge Autodesk.AEC.LevelsExtension, where is the cut plane? It looks like it doesn't follow the view range in Revit.

To add to what Cyrille mentioned:
Revit's AEC model data is generated automatically during the Forge translation. The level info is dumped from the Revit Level elements, e.g., the name, GUID, elevation, and extensions of the Revit Level, which you can also see with the Revit API. The Levels extension just uses it to rebuild the level ranges.
For example, based on my research, the cutting range of level 1 goes from level 1's elevation to (level 2's elevation - a height adjustment). The height adjustment is there to avoid cutting right on top of the floors, which gives a better view. So it does not follow the view range of Revit's floor plan view.
If the Levels extension doesn't match your needs, you may check out my level section tool. This sample demonstrates the concept of how to create cut planes by levels.

In this extension, floor cut planes are defined based on the bounding box of the floor node in the scene instance tree. The floors themselves are defined in an additional AEC data JSON file which contains extra information.
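For reference, here is a minimal sketch of creating a per-level cut range yourself with viewer.setCutPlanes(), assuming a Z-up model; level1Elev, level2Elev, and heightAdjustment are made-up values standing in for what you would read from the AEC level data:

// Hypothetical elevations from the AEC level data (in model units)
const level1Elev = 0.0;
const level2Elev = 4.0;
const heightAdjustment = 0.3; // keep the cut just below the next floor slab

const zMin = level1Elev;
const zMax = level2Elev - heightAdjustment;

// Each THREE.Vector4 is a plane (normal.x, normal.y, normal.z, offset);
// geometry on the positive side of a plane is cut away (flip the signs if
// the section comes out inverted for your model).
viewer.setCutPlanes([
  new THREE.Vector4(0, 0, -1, zMin), // cut everything below zMin
  new THREE.Vector4(0, 0, 1, -zMax)  // cut everything above zMax
]);

// viewer.setCutPlanes([]) clears the section again.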

Related

Generating 2d views from cut planes automatically

I'm new to the Autodesk Viewer and the other APIs, and I could use some help figuring out which tools are best for what I want to do.
I'm using the Autodesk Viewer to let users generate 2D views of cut planes. To do this, I simply use the viewer's getScreenshot function and save the result as a blueprint in my app.
What I would like to do now is to automatically update my 2D views whenever the user updates their 3D model.
Currently, the only solution I have come up with is to store the camera position when taking the screenshot and then, when the 3D model is updated, have another machine open the viewer in the background and retake the screenshots from the same position.
This does not seem like a very elegant solution, so I would like to know if there is an alternative, such as a way to generate 2D views from an API call, or maybe using the Design Automation API with the viewer to take the screenshots.
Another thing I'm struggling with is getting precise measurements from the 2D views I'm generating. My current solution is to calculate the distance between the camera and the cut plane and then use the FOV to get an approximate measurement; the formula looks like this:
Math.tan((viewer.getCamera().fov / 2) * Math.PI / 180) * distanceBetweenCameraAndPlaneCut * 4;
but it is very dependent on the user facing the cut plane at a 90° angle, and I'm thinking there should be a better way to do this with the measure tool.
Thanks a lot for your time!
You should be able to run the Viewer on a server using puppeteer (without client-side components) to generate screenshots: https://forge.autodesk.com/blog/running-forge-viewer-headless-chrome-puppeteer
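A minimal sketch of that headless approach with puppeteer; the page URL is a placeholder and the window.geometryLoaded flag is an assumption (your hosting page would need to set such a flag itself, e.g. in a GEOMETRY_LOADED_EVENT handler):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setViewport({ width: 1600, height: 900 });

  // Your own page that hosts the Viewer and loads the translated model (placeholder URL)
  await page.goto('https://your-app.example.com/viewer?urn=YOUR_MODEL_URN');

  // Wait until the page reports that all geometry is in
  await page.waitForFunction('window.geometryLoaded === true', { timeout: 120000 });

  await page.screenshot({ path: 'blueprint.png' });
  await browser.close();
})();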
Note: you could also use the Design Automation API to do something similar, but then you are limited to the file formats the given product (e.g. AutoCAD) supports as input.
You could also simply save the state of the Viewer and reset it next time you load the same model in order to take a screenshot of the exact same area using getState()/restoreState(): https://adndevblog.typepad.com/cloud_and_mobile/2015/02/managing-viewer-states-from-the-api.html
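For example (the myApp.* calls are hypothetical persistence helpers in your own application):

// When the user takes a screenshot, persist the full viewer state next to it
const savedState = viewer.getState(); // camera, cut planes, isolation, ...
myApp.saveBlueprintState(JSON.stringify(savedState)); // hypothetical helper

// Later, once the updated model has loaded, restore the exact same view and re-shoot
viewer.restoreState(JSON.parse(myApp.loadBlueprintState()), null, true);
viewer.getScreenShot(1600, 900, (blobUrl) => myApp.saveBlueprint(blobUrl)); // hypothetical helper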
Why are you trying to measure the distance between the cut plane and the camera position? Is that in order to restore the Viewer state/camera? If so, the getState()/restoreState() approach above should help.

How to get Autodesk Viewer LayerManager to RestoreState properly

I've encountered a bug in the Autodesk Viewer LayerManager extension that breaks the restoreState functionality. I am saving the state of a multilayer DWG file using getState and re-applying that state using restoreState. When I restore the state, most or all of the layers are hidden, even if they weren't when I saved it.
It looks like this is an issue with how the state is being saved and interpreted. I dug into the state JSON and found the list of visible layers (state.objectSet[0].isolated) in this form:
["0","1","2","3","4","5"]
After some experimenting, I found out that the LayerManager is expecting either the integer indices of the layers or the string names of the layers. Something like:
[0,1,2,3,4,5]
or
["layer0","layer1","layer2","layer3","layer4","layer5"]
(assuming those are the names of each layer)
So the current implementation breaks because it looks for layers with the names "0", "1", "2", etc., no matter what the actual layer names are.
I am wondering if there is a way to fix or work around this. A temporary solution is to parse the state JSON and cast the layer numbers to integers, but that is a bit of a hack.
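For reference, a minimal sketch of that casting workaround (savedStateJson stands for the JSON produced earlier by viewer.getState()):

// Cast the saved layer entries to integers before restoring, so the
// LayerManager receives layer indices instead of numeric strings
const state = JSON.parse(savedStateJson);
if (state.objectSet && state.objectSet[0] && Array.isArray(state.objectSet[0].isolated)) {
  state.objectSet[0].isolated = state.objectSet[0].isolated.map(Number);
}
viewer.restoreState(state);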
This is a known issue and is currently being looked into by our engineering team. Stay tuned to our Forge Blog and watch the release notes to keep tabs on a fix.
In the meantime, as a quick workaround, you can programmatically reveal all the layers once all graphics are loaded:
viewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, ()=>viewer.showAll())

Is it possible to insert multiple blocks into a drawing using the Forge Design Automation API?

As the title says, I am looking to use the Design Automation API to upload an archive of blocks, then insert them all into a drawing with a border.
The positioning of the blocks does not matter; they just need to be inserted into the drawing, ready to be arranged by an engineer.
Any help or advice on endpoints or limitations would be great, thanks.
This is doable. You need to develop a custom activity with two inputs and one output: input #1 is the base drawing with the border, and input #2 is a zip file with the blocks. When you submit your workitem, mark the second input argument with ResourceKind = ResourceKind.ZipPackage; this tells the service to unzip the file into the folder designated by LocalFileName. Then your script can enumerate the files in that folder (see vl-directory-files) and issue the INSERT command for each one.
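Very roughly, the workitem could be shaped like the sketch below. The field names follow the Design Automation for AutoCAD client the answer refers to and are illustrative only; the activity name and URLs are placeholders:

// Illustrative workitem payload only; exact field names depend on the
// Design Automation API version/client you are using.
const workItem = {
  ActivityId: 'YourNickname.InsertBlocksActivity', // hypothetical custom activity (2 inputs, 1 output)
  Arguments: {
    InputArguments: [
      { Name: 'HostDwg', Resource: 'https://.../border.dwg' }, // input #1: base drawing with the border
      {
        Name: 'Blocks',                                        // input #2: zip with the block drawings
        Resource: 'https://.../blocks.zip',
        ResourceKind: 'ZipPackage',  // service unzips into the folder named by LocalFileName
        LocalFileName: 'blocks'
      }
    ],
    OutputArguments: [
      { Name: 'Result', Resource: 'https://.../result.dwg', HttpVerb: 'PUT' }
    ]
  }
};
// POST this to the Design Automation WorkItems endpoint; the activity's script
// then enumerates the files in the "blocks" folder (vl-directory-files) and
// issues an INSERT for each drawing it finds.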

Creating and running your own algorithms on a localized map

So here is my problem. I plan to implement a localized map for my college presenting all the locations, such as the main block, tech park, etc. Not only do I plan to develop a GUI, but I also want to run my own algorithms, such as finding the quickest route from one block to another (note: I will be writing the algorithm myself, since I don't want to treat the shortest route as the quickest, but want to add my own parameters as weights). I want to host the map locally (say, on an in-house system), and it should be able to cater to real-time requests (displaying the route to the nearest cafeteria) and display current data (such as what event is taking place in which corner of the campus). I know the Google Maps API or OpenStreetMap/OpenLayers APIs will enable me to build my own map, but can I run my own algorithms on them? Also, can I add elements that I have created and replace the traditional building/office components with my own?
You can do the following:
1. Export a part of OpenStreetMap from their website (go to the Export tab).
2. Use ElementTree in Python to parse the exported XML data.
3. Use networkx to add the parsed data into a graph.
4. Run your algorithms on it (see the sketch below).
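The tooling above is Python (ElementTree plus networkx), but the heart of step 4 is just a shortest-path search over a weighted graph where you pick the weights. A minimal illustration in JavaScript with made-up campus nodes and weights (networkx's shortest-path functions do the same job for you in Python):

// Tiny Dijkstra over an adjacency map; the weights can encode whatever
// "quickest" means to you (walking time, crowding, stairs penalty, ...).
const campus = {
  mainBlock: { techPark: 5, cafeteria: 2 },
  techPark:  { mainBlock: 5, library: 3 },
  cafeteria: { mainBlock: 2, library: 7 },
  library:   { techPark: 3, cafeteria: 7 }
};

function quickestRoute(graph, start, goal) {
  const dist = { [start]: 0 };
  const prev = {};
  const unvisited = new Set(Object.keys(graph));

  while (unvisited.size > 0) {
    // pick the unvisited node with the smallest known cost
    let current = null;
    for (const node of unvisited) {
      if (dist[node] !== undefined && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (current === null || current === goal) break;
    unvisited.delete(current);

    // relax the edges leaving the current node
    for (const [neighbor, weight] of Object.entries(graph[current])) {
      const candidate = dist[current] + weight;
      if (dist[neighbor] === undefined || candidate < dist[neighbor]) {
        dist[neighbor] = candidate;
        prev[neighbor] = current;
      }
    }
  }

  // rebuild the route from goal back to start
  const route = [goal];
  while (route[0] !== start) route.unshift(prev[route[0]]);
  return route;
}

console.log(quickestRoute(campus, 'mainBlock', 'library')); // [ 'mainBlock', 'techPark', 'library' ]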

How to extract data from an embed Raphael dataset to CSV?

I'm attempting to extract the data from this Google Politics Insights webpage, from "Jan-2012 to the Present", for Mitt Romney and Barack Obama, for the following datasets:
Search Trends (based on volume)
Google News Mentions (mentions in articles and blog posts)
YouTube Video Views (views from candidate channels)
Using Firebug, I was able to figure out that the data is stored in a format readable by Raphael 2.1.0; I looked at the dataset and nothing struck me as a simple way to convert it to CSV.
How do I convert the data, per chart and per presidential candidate, into a CSV with a table for "Search Trends", "Google News Mentions", and "YouTube Video Views", broken down by the smallest increment of time, with the measured results scaled to a value of 0.0 to 1.0? (Note: the reason for 0.0 to 1.0 is that the graphs do not appear to give volume info, so the volume is relative to the height of the graph itself.)
Alternatively, if there's another source for all three datasets in CSV, that would work too.
The first thing to do is to find out where the data comes from, so I looked at the network traffic in my developer console and found it quickly: the data is stored as JSON here.
Now you've got plenty of data for each candidate. I don't know exactly how these numbers relate to one another, but they are definitely used for the calculation in the graph. I found the relevant spot in main.js on line 392, where they calculate the plotted data with this expression:
Math.log(dataPoints[i][j] * 100.0) / Math.log(logScaleBase);
My guess: undo the logarithm (i.e., apply the corresponding exponentiation) and you should get back the original values.
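As a sketch of that inversion plus a CSV export: the JSON URL, its dataPoints layout, and logScaleBase are assumptions here, since only the expression above is known:

// Sketch (Node 18+ or browser): fetch the chart JSON, normalize each series to
// 0.0-1.0 of its own maximum, and print CSV rows (timeIndex, series, value).
const url = 'https://example.com/politics-insights.json'; // placeholder for the JSON URL from the network tab
const logScaleBase = 10; // assumption; the real base is defined in main.js

fetch(url)
  .then((res) => res.json())
  .then((data) => {
    console.log('timeIndex,series,value');
    // Assumed shape: data.dataPoints is an array of numeric series, one per chart/candidate
    data.dataPoints.forEach((series, s) => {
      const max = Math.max(...series);
      series.forEach((value, t) => {
        console.log(`${t},${s},${(value / max).toFixed(3)}`);
      });
    });
  });

// If you only have the plotted (log-scaled) values, invert the expression first:
//   plotted = Math.log(raw * 100) / Math.log(logScaleBase)
//   raw     = Math.pow(logScaleBase, plotted) / 100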