ArcGIS: Create Feature from Coordinate Array

I'm trying to use the Bing Isochrone API to create a feature in ArcGIS...
For now, I have a coordinate array defining a polygon - roughly like [[32.19802, -97.09152],[32.17197, -97.05196],[32.16111, -97.02786],[32.1298, -97.01473],...etc.]
I'd like to take that collection of coordinates and turn it into a feature in ArcGIS... But I'm new enough to this that I'm not sure how to go about it.
If it helps, I'd actually like to provide multiple polygon arrays and use them to create multiple features. I could put the coordinate arrays into a table or something if that would help.
Any ideas?

ArcMap has an option to import X,Y coordinates and convert them into a shapefile. Check this link for more info on how to do it: https://support.esri.com/en/technical-article/000012745
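If you would rather script it (for instance, for many arrays at once), a minimal ArcPy sketch along these lines could build polygon features directly. The output path and file name are placeholders, and note that arcpy wants (x, y), i.e. (longitude, latitude), while the arrays above look like [latitude, longitude] pairs:

import arcpy

# One array per isochrone polygon, in the [lat, lon] order shown above.
polygons = [
    [[32.19802, -97.09152], [32.17197, -97.05196],
     [32.16111, -97.02786], [32.12980, -97.01473]],
]

sr = arcpy.SpatialReference(4326)  # WGS84, since these look like GPS coordinates
fc = arcpy.management.CreateFeatureclass(
    r"C:\data", "isochrones.shp", "POLYGON", spatial_reference=sr)

with arcpy.da.InsertCursor(fc, ["SHAPE@"]) as cursor:
    for coords in polygons:
        # arcpy.Point takes (x, y) = (longitude, latitude)
        ring = arcpy.Array([arcpy.Point(lon, lat) for lat, lon in coords])
        cursor.insertRow([arcpy.Polygon(ring, sr)])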

Related

Implementing shortest path algorithm in Autodesk Forge Viewer

I am trying to draw a geometry in my viewer based on the shortest path between 2 objects.
So far I know how to draw custom geometry using Vector3.
I have also figured out which algorithms I could use to find the shortest path from point A to point B. Here are a few:
Dijkstra's
A* Search
I have seen this example where such an algorithm is implemented, and I am trying to build a similar solution in the Forge Viewer. Here is the Link.
Also, can someone help me restrict the first-person view from passing through walls, like in the sample above? Right now in the Forge Viewer I can go through walls, which I want to avoid. Or is there a way I can identify the walls?
Unfortunately the viewer does not provide a lot of support for path-finding, so most of it you would have to do manually.
Here's some of the functionality that is available that might be handy in your case:
you can "shoot rays" inside the scene and compute their intersections with the nearest geometry, for example, using viewer.impl.rayIntersect(ray, ignoreTransparent)
for example, this could be used to detect collisions with walls if you had some kind of an avatar inside the scene
if needed, you can retrieve the geometry of individual objects in the scene using the "fragment list":
let frags = viewer.model.getFragmentList();
let tree = viewer.model.getInstanceTree();
// Enumerate all fragments that make up the object with the given dbid
tree.enumNodeFragments(dbid, function (fragid) {
    let mesh = frags.getVizmesh(fragid);
    // Do something with the mesh...
});
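And here is a minimal sketch of the ray-casting idea mentioned above (the avatar position and movement direction are made up, and viewer.impl.rayIntersect is an internal, largely undocumented API, so treat the details as assumptions):
let position = new THREE.Vector3(10, 5, 1.8);   // hypothetical avatar position
let direction = new THREE.Vector3(1, 0, 0);     // hypothetical movement direction
let ray = new THREE.Ray(position, direction.normalize());
let hit = viewer.impl.rayIntersect(ray, true);  // true = ignore transparent objects
if (hit && hit.distance < 0.5) {
    // Solid geometry (e.g. a wall) is within half a unit - block the movement.
    console.log('Collision with dbId', hit.dbId, 'at distance', hit.distance);
}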

How to use the various Forge Viewer transforms

Below are the various transforms I have found so far using NOP_VIEWER.model.getData().
I'm using the transforms to bring a position into viewer space, and I haven't been able to find any good documentation describing what they all do. My hope here is that this question can help by providing some documentation of the role of these transforms and how/when to use them.
The model originally comes from Revit.
GlobalOffset (Vector3)
placementWithOffset (Matrix4) - seems to be just the inverse of GlobalOffset as a matrix?
placementTransform (Matrix4) - undefined in all models I've tested; I've seen some hints that this is a user-defined matrix.
refPointTransform (Matrix4)
Also, there are some transforms in NOP_VIEWER.model.getData().metadata. These may be Revit-specific:
metadata.georeference.positionLL84 (Array[3]) - this is where the model's GPS coords are stored
metadata.georeference.refPointLMV (Array[3]) - no idea what this is, and it has huge and seemingly random values on many models. For example, on my current model it is [-17746143.211481072, -6429345.318822183, 27.360225423452952]
metadata.[custom values].angleToTrueNorth - I guess this is specifying whether the model is aligned to true or magnetic north?
metadata.[custom values].refPointTransform - (Array[12]) - data used to create the refPointTransform matrix above
Can someone help by documenting what these transforms do?
Related: Place a custom object into viewer space using GPS coords
As an alternative solution, the Viewer works with extensions. The Autodesk.Geolocation extension provides a few methods to handle the data structure you mentioned:
Load extension:
let geoExt;
NOP_VIEWER.loadExtension('Autodesk.Geolocation').then((e) => {geoExt = e});
Or get already loaded extension:
let geoExt = NOP_VIEWER.getLoadedExtensions()['Autodesk.Geolocation']
Then use the methods to convert the coordinates:
geoExt.lmvToLonLat
geoExt.lonLatToLmv
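For example, a minimal sketch (the viewer-space coordinates are made up, and both methods take and return THREE.Vector3-like objects):
let lmvPos = new THREE.Vector3(10, 20, 0);    // hypothetical point in viewer space
let lonLat = geoExt.lmvToLonLat(lmvPos);      // x = longitude, y = latitude
console.log('GPS:', lonLat.x, lonLat.y);
let backToLmv = geoExt.lonLatToLmv(lonLat);   // and back into viewer space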
Here is a quick article on it.
You may .activate() the extension to see additional information on the model's geolocation.

Mapbox: How do I create a FeatureCollection programmatically?

I want to create clustering on my map. When looking at guides and in the docs, the FeatureCollection JSON is always pulled from some external link. But how do I just create it programmatically as I read data from my server? I don't have it all ready in one place, and it will always be changing anyway depending on the user.
I've been stuck on this issue before and ended up using a duct-tape solution, but it won't work now. Can anyone please shed some light on this?
You're able to create a FeatureCollection using an existing Feature object or array/list of Feature objects. This could be turned into a method that you could use to generate a new FeatureCollection whenever you receive a new dataset.
Given the information that you've provided, I am going to have to make some assumptions here - I hope that the following code snippet helps guide you in the right direction:
public FeatureCollection getFeatureCollectionFromCoordinateList(List<Coordinate> coords) {
    List<Feature> pointsList = new ArrayList<>();
    for (Coordinate coord : coords) {
        // Note: Mapbox expects longitude first, then latitude.
        Feature feature = Feature.fromGeometry(
                Point.fromLngLat(coord.getLongitude(), coord.getLatitude()));
        pointsList.add(feature);
    }
    return FeatureCollection.fromFeatures(pointsList);
}
In the above example, the object I've used to represent data from the server is called Coordinate, and I've given it getLatitude() and getLongitude() methods. Each coordinate is turned into a Feature with Feature.fromGeometry(), passing in a Point.fromLngLat(), and the resulting List of Feature objects becomes a Mapbox FeatureCollection.
Please note that this mightn't be the best way to go about what you're trying to achieve here. That said, I hope it illustrates another way you can instantiate a FeatureCollection without reading in a JSON data source.
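Since the end goal is clustering, here is a hedged usage sketch built on that method (assuming the Android Maps SDK style API; the source id, mapStyle, and the coordinate lists are placeholders):
// Feed the generated FeatureCollection into a clustered GeoJSON source.
GeoJsonSource source = new GeoJsonSource(
        "my-points", // hypothetical source id
        getFeatureCollectionFromCoordinateList(coords),
        new GeoJsonOptions().withCluster(true).withClusterRadius(50));
mapStyle.addSource(source);
// When fresh data arrives from the server, swap the data in place
// instead of recreating the source.
source.setGeoJson(getFeatureCollectionFromCoordinateList(newCoords));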

How to draw a texture on an OBJ model through an OptiX example

I'm very new to OptiX and CUDA.
I'm trying to modify an OptiX SDK example to present a 3D model with ray tracing. I modified the "progressivePhotonMap" example. Because I lack OptiX/CUDA knowledge, I don't know how to draw a texture on the 3D model; could anyone who is familiar with the SDK examples help me?
I read other texture-drawing examples like "swimmingShark" or "cook" and tried to find clues there. However, those examples seem to draw textures in a different way.
So far, I know I have to load the texture in the .cpp file
GeometryInstance instance = m_context->createGeometryInstance( mesh, &m_material, &m_material+1 );
instance["diffuse_map"]->setTextureSampler(loadTexture( m_context, ... );
and create a TextureSampler in the CUDA file
rtTextureSampler<float4, 2> diffuse_map; // Corresponds to OBJ mtl params
and give it the texture coordinates to draw with, like this:
float3 Kd = make_float3( tex2D( diffuse_map, texcoord.x*diffuse_map_scale, texcoord.y*diffuse_map_scale ) );
However, I cannot find where texcoord gets the texture coordinate data in the CUDA file.
It seems there should be some code like this in the .cpp file:
GI["texcoord"]->setBuffer(texcoord)
Could anyone show me where texcoord gets the texture coordinate data, and how to match the coordinate data with the texture to present the 3D model with ray tracing?
I can't find a tutorial on Google; I really need help or a direction to reach my goal. Thank you.
You should read up on the OptiX documentation first, specifically the paragraph regarding attribute variables.
IIRC the texcoord variable is an attribute of the form
rtDeclareVariable( float3, texcoord, attribute texcoord );
that is computed in the intersection program and passed along to the closest hit program (attributes are designed to pass data from the intersection point to the shading computation).
Short answer: texcoord is set in another CUDA function (the intersection program) which, conceptually, computes the data needed by that line. See the sketch below.
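For concreteness, here is a rough sketch of such an intersection program, modeled loosely on the SDK's triangle mesh samples. The buffer names and layout are assumptions; the SDK's OBJ loader normally creates and fills buffers like these for you, which is why you rarely see an explicit setBuffer call for texcoord in the .cpp files:

#include <optix_world.h>
using namespace optix;

// Hypothetical per-mesh buffers, filled on the host side (the OBJ loader does this).
rtBuffer<float3> vertex_buffer;    // vertex positions
rtBuffer<float2> texcoord_buffer;  // per-vertex UVs from the OBJ file
rtBuffer<int3>   vindex_buffer;    // vertex indices, one int3 per triangle
rtBuffer<int3>   tindex_buffer;    // texcoord indices, one int3 per triangle

rtDeclareVariable(optix::Ray, ray, rtCurrentRay, );
rtDeclareVariable(float3, texcoord, attribute texcoord, );

RT_PROGRAM void mesh_intersect(int primIdx)
{
    int3 v_idx = vindex_buffer[primIdx];
    float3 p0 = vertex_buffer[v_idx.x];
    float3 p1 = vertex_buffer[v_idx.y];
    float3 p2 = vertex_buffer[v_idx.z];

    float3 n;
    float t, beta, gamma;
    if (intersect_triangle(ray, p0, p1, p2, n, t, beta, gamma)) {
        if (rtPotentialIntersection(t)) {
            // Interpolate the per-vertex UVs using the barycentric coordinates
            // and store the result in the attribute. OptiX carries attributes
            // from the intersection program to the closest hit program, where
            // tex2D(diffuse_map, texcoord.x, texcoord.y) reads them.
            int3 t_idx = tindex_buffer[primIdx];
            float2 uv = texcoord_buffer[t_idx.x] * (1.0f - beta - gamma)
                      + texcoord_buffer[t_idx.y] * beta
                      + texcoord_buffer[t_idx.z] * gamma;
            texcoord = make_float3(uv.x, uv.y, 0.0f);
            rtReportIntersection(0);
        }
    }
}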

ArcGIS Server - snap a point to a line

If I have a point, and a road network, how do I find the nearest point ON the road? i.e. this is like snapping the point to a line/road.
I am using ArcGIS Server 9.3 with Java 5 and Oracle 10g. I am using the ST functions and Network Analyst via the Java API.
Thanks.
The parts of the network should be created from lines or curves, so each of its features should implement the ICurve interface, which provides the method queryPointAndDistance(). Using that method and your point, you can get the nearest point on each feature you are interested in.
If you want to find the nearest feature, you have to loop through the collection of features (e.g. the ones you have traced before) and compare the distanceFromCurve parameter for each feature, as sketched below. See the JavaDoc: http://resources.esri.com/help/9.3/arcgisengine/java/api/arcobjects/com/esri/arcgis/geometry/ICurve.html.
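A rough sketch of that loop (assumptions: cursor is a feature cursor you have already opened over the line features, inputPoint is your point, and the Java API returns the out-parameters through single-element arrays):
// Find the nearest point on the nearest line feature.
IPoint snapped = new Point();          // receives the nearest point on the current curve
double[] along = new double[1];        // distance along the curve
double[] fromCurve = new double[1];    // perpendicular distance from the curve
boolean[] rightSide = new boolean[1];  // side of the curve the input point lies on

double bestDistance = Double.MAX_VALUE;
IPoint bestSnapped = null;

IFeature feature = cursor.nextFeature();
while (feature != null) {
    ICurve curve = (ICurve) feature.getShape();
    curve.queryPointAndDistance(esriSegmentExtension.esriNoExtension,
            inputPoint, false, snapped, along, fromCurve, rightSide);
    if (fromCurve[0] < bestDistance) {
        bestDistance = fromCurve[0];
        bestSnapped = snapped;
        snapped = new Point();         // keep the stored result from being overwritten
    }
    feature = cursor.nextFeature();
}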
Use INALocator.queryLocationByPoint(). You can create an NALocator from your NAContext. Pass the point to the locator and it will "snap" the point to the road network, as in the sketch below.
The URL button isn't working; the link to the JavaDoc is http://resources.esri.com/help/9.3/ArcGISengine/java/api/arcobjects/com/esri/arcgis/networkanalyst/INALocator.html#queryLocationByPoint(com.esri.arcgis.geometry.IPoint, com.esri.arcgis.networkanalyst.INALocation[], com.esri.arcgis.geometry.IPoint[], double[])
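A minimal sketch of that call (assuming naContext is your existing NAContext and inputPoint is the point to snap; as above, the out-parameters come back through single-element arrays):
// Snap a point to the road network with the Network Analyst locator.
INALocator locator = naContext.getLocator();
INALocation[] location = new INALocation[1];  // where on the network the point snapped
IPoint[] snapped = new IPoint[1];             // geometry of the snapped point
double[] distance = new double[1];            // distance from the input point to the network
locator.queryLocationByPoint(inputPoint, location, snapped, distance);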