I want to extract drawing extents from Civil 3D DWGs using Design Automation.
If I use the code below:
public static Extents3d GetExtents(Database db) {
    try {
        //db.UpdateExt(true);
        return new Extents3d(db.Extmin, db.Extmax);
    } catch {
        return new Extents3d();
    }
}
Then I get the following result, which is correct:
Min: [2538909.32, 330601.59, 0.00]
Max: [2540326.77, 331107.96, 0.00]
However, if I call db.UpdateExt(true) first, or if I simply iterate all entities in model space with the code below, I get a min bound that sits at the origin:
public static Extents3d GetExtents(Database db) {
    try {
        // Accumulate the extents of every entity in the current space
        var extents = new Extents3d();
        var txMng = db.TransactionManager;
        using (var tx = txMng.StartTransaction()) {
            var btr = tx.GetObject(
                db.CurrentSpaceId, OpenMode.ForRead)
                as BlockTableRecord;
            foreach (var id in btr) {
                var entity = tx.GetObject(id, OpenMode.ForRead)
                    as Entity;
                extents.AddExtents(entity.GeometricExtents);
            }
            tx.Commit();
        }
        return extents;
    } catch {
        return new Extents3d();
    }
}
Outputs:
Min: [0.00, 0.00, 0.00]
Max: [2540326.77, 331107.96, 0.00]
Also, opening the DWG in vanilla AutoCAD and doing a zoom extents uses these huge/invalid extents. So I think Civil 3D knows that some entities should not be included when computing extents. Or is it something else?
I would like to compute the extents using the second approach (or at least a modified, working version of it), as it offers more granularity over which entities to consider if we have more advanced requirements later on.
When iterating the entities to compute the extents, it is fine to skip the AeccDbNetworkPartConnector entities. You can use entity.GetRXClass().Name to get the class name, which lets you filter these entities out.
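For example, here is a minimal sketch of the second approach with that filter added; the class-name check is the only change from the second snippet above:

public static Extents3d GetExtents(Database db) {
    try {
        var extents = new Extents3d();
        using (var tx = db.TransactionManager.StartTransaction()) {
            var btr = tx.GetObject(
                db.CurrentSpaceId, OpenMode.ForRead)
                as BlockTableRecord;
            foreach (var id in btr) {
                var entity = tx.GetObject(id, OpenMode.ForRead)
                    as Entity;
                // Skip Civil 3D network part connectors; including them
                // drags the min bound to the origin, as described above
                if (entity.GetRXClass().Name == "AeccDbNetworkPartConnector")
                    continue;
                extents.AddExtents(entity.GeometricExtents);
            }
            tx.Commit();
        }
        return extents;
    } catch {
        return new Extents3d();
    }
}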
I was looking at the sample code for the tutorial at https://forge.autodesk.com/blog/custom-window-selection-forge-viewer-part-iii, which is located at https://github.com/Autodesk-Forge/forge-rcdb.nodejs/blob/master/src/client/viewer.components/Viewer.Extensions.Dynamic/Viewing.Extension.SelectionWindow/Viewing.Extension.SelectionWindow.Tool.js, as well as the documentation at https://developer.autodesk.com/en/docs/viewer/v2/reference/javascript/toolinterface/. Most of these functions are getting called properly in my tool, such as handleSingleClick, handleMouseMove, handleKeyDown, and so on, but two of them are never hit: handleButtonDown and handleButtonUp. I was using viewer version 3.3.x, but I have updated to 4.0.x thinking that might resolve the problem; the same issue occurs in both versions. Thanks for any help.
The following code block is from Autodesk.Viewing.ToolController#__invokeStack(). Here _toolStack holds the tools activated in the ToolController, and method is the name of a callback function starting with handle, i.e. handleSingleClick, handleMouseMove, handleKeyDown, handleButtonDown, handleButtonUp, etc.:
for( var n = _toolStack.length; --n >= 0; )
{
    var tool = _toolStack[n];
    if( tool[method] && tool[method](arg1, arg2) )
    {
        return true;
    }
}
Based on my experience, if another tool's handler such as handleButtonDown or handleButtonUp runs before your custom tool's and returns true, then your handler will never be called.
Fortunately, as of v3.2 the Forge Viewer provides a priority mechanism for custom tools registered in the ToolController. The ToolController uses the priority number to sort the tools it holds, and the priority of each tool is 0 by default. You can override the priority so your tool is hit before other tools by adding a getPriority() function that returns a number greater than 0:
this.getPriority = function () {
    return 100;
};
I found out that when using ES6 class syntax, extending your tool from Autodesk.Viewing.ToolInterface prevents the overrides from working properly, probably because ToolInterface is not implemented using prototypes in the viewer source code.
You can simply create a class and implement the methods that are of interest for your tool:
// KO: not working!
class MyTool extends Autodesk.Viewing.ToolInterface {
    getName () {
        return 'MyTool'
    }
    getNames () {
        return ['MyTool']
    }
    handleButtonDown (event, button) {
        return false
    }
}

// OK
class MyTool {
    getName () {
        return 'MyTool'
    }
    getNames () {
        return ['MyTool']
    }
    handleButtonDown (event, button) {
        return false
    }
}
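For completeness, a rough sketch of how such a tool is typically registered and activated (assuming viewer is an already-initialized viewer instance):

const tool = new MyTool()

// Register the tool, then activate it by name. If the tool exposes a
// getPriority() returning a value > 0, its handlers run before the
// viewer's default tools.
viewer.toolController.registerTool(tool)
viewer.toolController.activateTool('MyTool')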
Consider this element (minimal for the purposes of the question):
class MyCountDown extends Polymer.Element
{
    static get is () { return 'my-count-down'; }

    static get properties ()
    {
        return {
            time: { /* time in seconds */
                type: Number,
                observer: '_startCountDown'
            },
            remains: Number
        }
    }

    _startCountDown ()
    {
        this.remains = this.time;
        this.tickInterval = window.setInterval(() => {
            this.remains--;
            if (this.remains == 0) {
                console.log('countdown!');
                this._stopCountDown();
            }
        }, 1000);
    }

    _stopCountDown () {
        if (this.tickInterval) {
            window.clearInterval(this.tickInterval);
        }
    }
}

customElements.define(MyCountDown.is, MyCountDown);
If I get one instance and set the property time,
let MyCountDown = customElements.get('my-count-down');
let cd = new MyCountDown();
cd.time = 5;
the property time changes, but the observer _startCountDown() is not called. I believe Polymer is waiting for the instance to be attached to the DOM, because when I appendChild() this element to the document, the countdown starts and after 5 seconds the console logs 'countdown!' as expected.
My goal is to execute this lifecycle without attaching anything to the document, because the MyCountDown instances are not always attached to the view, yet they need to keep running and be shared between the different components of my web application.
One solution is to attach the new MyCountDown instances to a hidden element of the DOM to force the Polymer lifecycle, but I think this is not very intuitive.
I don't know the exact place to call it, but the problem you have is that the property accessors are not in place.
I think you might get a clue from this talk from Google I/O: https://www.youtube.com/watch?v=assSM3rlvZ8
Perhaps call this._enableProperties() in the constructor?
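A rough sketch of that idea follows. Note that _enableProperties() is a Polymer internal (it is normally invoked when the element is connected), so relying on it may break between Polymer releases:

class MyCountDown extends Polymer.Element
{
    constructor ()
    {
        super();
        // Force the property accessors and observers to be set up even
        // though the element has not been attached to the DOM yet. Polymer
        // normally triggers this internal method from connectedCallback().
        this._enableProperties();
    }

    // ... is, properties, _startCountDown and _stopCountDown as in the question ...
}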
I am using Places.GeoDataApi for Android, and I get different search results depending on the location of the device performing the request. I need the results to be consistently located inside the bounds. I don't see where that could be set up in the getAutocompletePredictions request. Is there anything I am missing?
I get address/place autocomplete suggestions using the GoogleApiClient and Places API through:
Places.GeoDataApi.getAutocompletePredictions()
The method requires a GoogleApiClient object, a String to autocomplete, and a LatLngBounds object to limit the search range. This is what my usage looks like:
LatLngBounds bounds = new LatLngBounds(
    new LatLng(38.46572222050097, -107.75668023304138),
    new LatLng(39.913037779499035, -105.88929176695862));

GoogleApiClient mGoogleApiClient = new GoogleApiClient.Builder(this)
    .addConnectionCallbacks(this)
    .addOnConnectionFailedListener(this)
    .addApi(LocationServices.API)
    .addApi(Places.GEO_DATA_API)
    .build();

PendingResult<AutocompletePredictionBuffer> results =
    Places.GeoDataApi.getAutocompletePredictions(mGoogleApiClient, "Starbucks", bounds, null);
Version in use: com.google.android.gms:play-services-location:8.3.0
Documentation:
https://developers.google.com/places/android/autocomplete
Good news: as of April 2018, Google added the ability to specify how the bounds are treated in autocomplete predictions. You can now use the getAutocompletePredictions() method of the GeoDataClient class with a boundsMode parameter:
public Task<AutocompletePredictionBufferResponse> getAutocompletePredictions (String query, LatLngBounds bounds, int boundsMode, AutocompleteFilter filter)
boundsMode - how to treat the bounds parameter. When set to STRICT, predictions are contained by the supplied bounds. If set to BIAS, predictions are biased towards the supplied bounds. If bounds is null, this parameter has no effect.
source: https://developers.google.com/android/reference/com/google/android/gms/location/places/GeoDataClient
You can modify your code to something similar to:
LatLngBounds bounds = new LatLngBounds(
    new LatLng(38.46572222050097, -107.75668023304138),
    new LatLng(39.913037779499035, -105.88929176695862));

GeoDataClient mGeoDataClient = Places.getGeoDataClient(getBaseContext());

Task<AutocompletePredictionBufferResponse> results =
    mGeoDataClient.getAutocompletePredictions("Starbucks", bounds, GeoDataClient.BoundsMode.STRICT, null);

try {
    Tasks.await(results, 60, TimeUnit.SECONDS);
} catch (ExecutionException | InterruptedException | TimeoutException e) {
    e.printStackTrace();
}

try {
    AutocompletePredictionBufferResponse autocompletePredictions = results.getResult();
    Log.i(TAG, "Query completed. Received " + autocompletePredictions.getCount()
            + " predictions.");
    // Freeze the results to an immutable representation that can be stored safely.
    ArrayList<AutocompletePrediction> al = DataBufferUtils.freezeAndClose(autocompletePredictions);
    for (AutocompletePrediction p : al) {
        CharSequence cs = p.getFullText(new CharacterStyle() {
            @Override
            public void updateDrawState(TextPaint tp) {
            }
        });
        Log.i(TAG, cs.toString());
    }
} catch (RuntimeExecutionException e) {
    // If the query did not complete successfully, log the error.
    Log.e(TAG, "Error getting autocomplete prediction API call", e);
}
I hope this helps!
I got the same problem.
Unfortunately, there is no way to get places within specific bounds using the Google Places API for Android.
However, you can still use Nearby Search from the Google Places API Web Service.
Documentation here :
https://developers.google.com/places/web-service/search?hl=fr
You should then be able to set the location constraints as parameters and get the places inside the bounds from the JSON response, as explained in this answer:
https://stackoverflow.com/a/32404701/5446285
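For illustration, a Nearby Search request looks roughly like this (the coordinates and key are placeholders). Note that the web service takes a location and radius rather than a bounds object, so you would filter the returned places against your bounds yourself:

https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=39.19,-106.82&radius=50000&keyword=Starbucks&key=YOUR_API_KEY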
---Background---
I'm trying to learn current best data management practices, and as far as I can tell that means stateless/stateful components and immutable data structures. I'm having problems with implementing the latter (immutables). I'm trying to incorporate it into Angular 2 without Redux. Redux is on my list of things to learn, but for now I want to use Immutable.js without Redux.
---The problem---
How do I create a copy of an array in a service and return it on demand? I have this example code (just for illustration purposes, I haven't tested it!):
import { Product } from './product';
import * as Immutable from 'immutable';

export class ProductListService {
    private id = 0;

    // I fill the list with some sample data
    private oldProductList = Immutable.List.of(
        new Product(this.id++, 'cheese'),
        new Product(this.id++, 'ham'),
        new Product(this.id++, 'milk'));
    private newProductList = Immutable.List<Product>();
    private returnProductList = this.oldProductList;

    getProductList() {
        return this.returnProductList;
    }

    addProduct() {
        // As far as I know, this creates a deep immutable copy
        this.newProductList = this.oldProductList.withMutations(list => {
            list.push(new Product(this.id++, 'name'));
        });
        this.returnProductList = this.newProductList;
        this.oldProductList = this.newProductList;
    }
}
The above code is based on the example from the official docs, where they just append a number to the variable name each time they create a copy (I understand that is only for example purposes?). How do I go about creating new lists without using numbers? Do I use oldList/newList? Do I dynamically create new numbered variables so that I have a history of objects?
I feel I'm doing something wrong on an architectural level here. What is the correct approach? All the Immutable.js examples either use Redux or show no real-life usage. Does someone know of good material for learning about Immutable.js (possibly with ng2)?
Thanks
Not sure I fully understand what you want to do here, but consider this: if you just want to push one element onto the list, you should not use withMutations.
let list1 = Immutable.List(['one'])
let list2 = list1.push('two')
console.log(list1.toJS()) // ['one']
console.log(list2.toJS()) // ['one', 'two']
Applying a mutation to create a new immutable object results in some overhead, which can add up to a minor performance penalty. Use withMutations only if you need to apply a series of mutations locally before returning:
let list1 = Immutable.List(['one'])
let list2 = list1.withMutations(function (list) {
    list.push('two').push('three').push('four').push('five');
});
console.log(list1.toJS()) //["one"]
console.log(list2.toJS()) //["one", "two", "three", "four", "five"]
Here we create a temporary mutable (transient) copy of list1 and apply a batch of mutations in a performant manner by using withMutations.
I hope that answers your question.
I have a JSON file which I need to iterate over, as shown below...
{
    "device_id": "8020",
    "data": [{
        "Timestamp": "04-29-11 05:22:39 pm",
        "Start_Value": 0.02,
        "Abstract": 18.60,
        "Editor": 65.20
    }, {
        "Timestamp": "04-29-11 04:22:39 pm",
        "End_Value": 22.22,
        "Text": 8.65,
        "Common": 1.10,
        "Editable": "true",
        "Insert": 6.0
    }]
}
The keys in data will not always be the same (I've just used examples; there are 20 different keys), and as such I cannot set up my script to statically reference them to get the values.
Otherwise I could write:
var value1 = json.data[0].Timestamp;
var value2 = json.data[0].Start_Value;
var value3 = json.data[0].Abstract;
and so on.
In the past I've used a simple foreach loop on the data node (in PHP):
foreach ($json->data as $key => $val) {
    switch ($key) {
        case 'Timestamp':
            // do this
            break;
        case 'Start_Value':
            // do this
            break;
    }
}
But I don't want to block the script. Any ideas?
You can iterate through JavaScript objects this way:
for (var attributename in myobject) {
    console.log(attributename + ": " + myobject[attributename]);
}
Here, myobject could be your json.data.
I would recommend taking advantage of the fact that Node.js will always be ES5. Remember, this isn't the browser: you can depend on the language implementation being stable. That said, I would recommend against ever using a for-in loop in Node.js, unless you really want to do deep recursion up the prototype chain. For simple, traditional looping, I would recommend making good use of ES5's Object.keys method. If you view the following JSPerf test, especially in Chrome (since it has the same engine as Node.js), you will get a rough idea of how much more performant this method is than a for-in loop (roughly 10 times faster). Here's a sample of the code:
var keys = Object.keys(obj);
for (var i = 0, length = keys.length; i < length; i++) {
    // Access the value for each key here
    obj[keys[i]];
}
You may also want to use hasOwnProperty in the loop.
for (var prop in obj) {
    if (obj.hasOwnProperty(prop)) {
        switch (prop) {
            // obj[prop] has the value
        }
    }
}
Node.js is single-threaded, which means your script will block whether you want it to or not. Remember that V8 (the Google JavaScript engine Node.js uses) compiles JavaScript into machine code, which means that most basic operations are really fast; looping through an object with 100 keys would probably take a couple of nanoseconds.
However, if you do a lot more inside the loop and you don't want it to block right now, you could do something like this:
switch (prop) {
    case 'Timestamp':
        setTimeout(function() { ... }, 5);
        break;
    case 'Start_Value':
        setTimeout(function() { ... }, 10);
        break;
}
If your loop is doing some very CPU-intensive work, you will need to spawn a child process to do that work, or use web workers.
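A rough sketch of the child-process variant (worker.js is a hypothetical script that runs the heavy loop and sends the result back):

var fork = require('child_process').fork;

// worker.js (hypothetical) would listen via process.on('message'),
// run the CPU-intensive loop, and process.send() the result back.
var worker = fork('./worker.js');

worker.send({ data: json.data });
worker.on('message', function (result) {
    console.log('worker finished:', result);
});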
If you want to avoid blocking, which is only necessary for very large loops, then wrap the contents of your loop in a function called like this: process.nextTick(function(){<contents of loop>}). This defers execution until the next tick, giving pending calls from other asynchronous functions an opportunity to be processed.
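For example, a minimal sketch of that pattern that processes one key per tick (processItem is a placeholder for the real per-key work):

function iterate(obj, keys, i) {
    if (i >= keys.length) return;
    processItem(keys[i], obj[keys[i]]); // processItem is a placeholder
    // Defer the next iteration so pending callbacks get a chance to run.
    process.nextTick(function () {
        iterate(obj, keys, i + 1);
    });
}

iterate(json.data[0], Object.keys(json.data[0]), 0);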
My preferred way is:
var objectKeysArray = Object.keys(yourJsonObj);
objectKeysArray.forEach(function (objKey) {
    var objValue = yourJsonObj[objKey];
});
If we are using Node.js, we should definitely take advantage of the libraries available for it. Functions like each(), map(), reduce() and many more from Underscore.js reduce our effort. Here's a sample:
var _ = require("underscore");
var fs = require("fs");

var jsonObject = JSON.parse(fs.readFileSync('YourJson.json', 'utf8'));

_.map(jsonObject, function (content) {
    _.map(content, function (data) {
        if (data.Timestamp)
            console.log(data.Timestamp);
    });
});
A little late, but I believe some further clarification is given below.
You can iterate through a JSON array with a simple loop as well, like:
for (var i = 0; i < jsonArray.length; i++)
{
    console.log(jsonArray[i].attributename);
}
If you have a JSON object and you want to loop through all of its inner objects, then you first need to get all the keys in an array and loop through the keys to retrieve objects using the key names, like:
var keys = Object.keys(jsonObject);
for (var i = 0; i < keys.length; i++)
{
    var key = keys[i];
    // Use bracket notation; jsonObject.key would look up a property literally named "key"
    console.log(jsonObject[key].attributename);
}
Not sure if it helps, but it looks like there is a library for async iteration in Node, hosted here: https://github.com/caolan/async
Async is a utility module which provides straight-forward, powerful functions for working with asynchronous JavaScript. Although originally designed for use with node.js, it can also be used directly in the browser.
Async provides around 20 functions that include the usual 'functional' suspects (map, reduce, filter, forEach…) as well as some common patterns for asynchronous control flow (parallel, series, waterfall…). All these functions assume you follow the node.js convention of providing a single callback as the last argument of your async function.
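For instance, a small sketch using async's each over the keys of one data entry (doSomethingAsync is a placeholder for your per-key work):

var async = require('async');

var entry = json.data[0];
async.each(Object.keys(entry), function (key, callback) {
    // Start the per-key work; invoke callback once this key is done.
    doSomethingAsync(key, entry[key], callback); // placeholder
}, function (err) {
    if (err) console.error(err);
    else console.log('all keys processed');
});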
Take a look at Traverse. It will recursively walk an object tree for you, and at every node you have a number of different things you can access: the key of the current node, its value, its parent, the full key path of the current node, etc. https://github.com/substack/js-traverse. I've used it to good effect on objects that I wanted to scrub of circular references, and when I needed to do a deep clone while transforming various data bits. Here's some code pulled from their samples to give you a flavor of what it can do:
var traverse = require('traverse');

var id = 54;
var callbacks = {};
var obj = { moo : function () {}, foo : [2, 3, 4, function () {}] };

var scrubbed = traverse(obj).map(function (x) {
    if (typeof x === 'function') {
        callbacks[id] = { id : id, f : x, path : this.path };
        this.update('[Function]');
        id++;
    }
});

console.dir(scrubbed);
console.dir(callbacks);