Saving node coordinates (layout) in igraph - igraph

I have a computed layout for a large graph. I would like to load this graph, together with its layout (coordinates), into Gephi. Is there a file format supported by both igraph and Gephi that allows specifying coordinates?

GraphML should be okay - just assign the x and y coordinates of the nodes as two vertex attributes (say, x and y) in igraph and then load the graph in Gephi. I'm pretty sure Gephi provides a way to use numeric vertex attributes as coordinates for a layout.

Is there a file format that is supported by both igraph and Gephi that allows coordinate specification?
Any format that supports node attributes.
I would like to load this graph, as well as its layout (coordinates), in Gephi.
Placing nodes according to their coordinates is possible using the GeoLayout plugin.
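To make the attribute mechanism concrete, here is a minimal stdlib-only sketch of the kind of GraphML that carries x/y as vertex attributes (the structure is illustrative; with python-igraph you would simply set `g.vs["x"]`, `g.vs["y"]` and call `g.write_graphml(...)`):

```python
# Sketch: build GraphML with x/y node attributes using only the stdlib,
# to show what Gephi will see as numeric vertex attributes.
import xml.etree.ElementTree as ET

NS = "http://graphml.graphdrawing.org/xmlns"

def graphml_with_layout(nodes, edges):
    """nodes: {id: (x, y)}; edges: [(src, dst)]. Returns GraphML text."""
    ET.register_namespace("", NS)
    root = ET.Element(f"{{{NS}}}graphml")
    # Declare the two numeric node attributes that hold the layout.
    for name in ("x", "y"):
        ET.SubElement(root, f"{{{NS}}}key",
                      {"id": name, "for": "node",
                       "attr.name": name, "attr.type": "double"})
    graph = ET.SubElement(root, f"{{{NS}}}graph",
                          {"edgedefault": "undirected"})
    for nid, (x, y) in nodes.items():
        node = ET.SubElement(graph, f"{{{NS}}}node", {"id": str(nid)})
        for key, value in (("x", x), ("y", y)):
            ET.SubElement(node, f"{{{NS}}}data", {"key": key}).text = str(value)
    for src, dst in edges:
        ET.SubElement(graph, f"{{{NS}}}edge",
                      {"source": str(src), "target": str(dst)})
    return ET.tostring(root, encoding="unicode")
```

Writing this text to a `.graphml` file gives Gephi per-node x/y values it can map to a layout (e.g. via GeoLayout or the Data Laboratory).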

Related

Do Forge Viewer SVF pack files use parent-child linked transforms?

Context: I've been extracting geometry data from the Forge SVF structures into an OBJ format using the Forge Extract code by Petr. These data are then transparently sent to a different rendering system for the project upon which I'm working. However, I'm noticing that there are incorrect rotations in groups of extracted objects. Not all objects, just groupings.
As an example, here is the Forge Viewer rendering of a group of objects (the long poles), with correct rotation. You can see all the poles evenly placed along the base-plate's edge and equally placed with regards to each other.
Whereas in the rendered extracted geometry, the objects in the group are correctly placed in relation to each other (equally spaced, 3x3), but the group as a whole is rotated slightly about the Z-axis relative to the bottom plate.
This is the type of behaviour I would expect if the individual poles were all child objects of some parent object (perhaps an invisible grouping object), and the rotation of the parent would pivot all the poles in the SVF but that rotation wasn't applied during geometry extraction.
This happens with all groupings with regards to individual objects in a scene.
While looking at this question, I get the strong impression that there is a 2nd rotational aspect but I cannot see how that applies when reading the SVF directly.
Question:
Obviously I'm not looking for a direct code solution, but to confirm the structure of the SVF pack files. Looking at the extraction, I don't see anything that would imply a parent-child grouping, but I haven't managed to think of an alternative cause.
So, are there such parent-child transform relationships in the SVF pack files, or a global rotational component which only applies to certain objects? If so, where is that placed within the pack file. And if not, what else could cause this type of systematic rotation of groups?
The SVF file format doesn't use parent-child transforms - all fragment transforms are basically world transforms. It's possible that my code for parsing the fragment transforms handles one of the transform types incorrectly. I'd try debugging the getTransform method for the dbId of the base or one of the poles, and compare the transform with the one parsed by Forge Viewer.
Also, I'm wondering if it's the base that's slightly off, and not the 3x3 poles?

In Keras (deep learning library), is such a custom embedding layer possible?

I recently moved from Theano/Lasagne to Keras.
When I was using Theano, I used a custom embedding layer like the one described here:
How to keep the weight value to zero in a particular location using theano or lasagne?
It was useful for dealing with variable-length input by adding padding.
Is such a custom embedding layer possible in Keras?
If so, how can I build it?
Or is such an embedding layer the wrong approach?
This may not be exactly what you want, but the solution I personally use, as in the Keras examples (e.g. this one), is to pad the data to a constant length before feeding it to the network.
Keras itself provides a pre-processing tool for sequences: keras.preprocessing.sequence.pad_sequences(sequences, maxlen).
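To illustrate what the padding does, here is a minimal pure-Python sketch of the idea behind `pad_sequences` (the real Keras function returns a NumPy array and has more options such as `truncating` and `dtype`; this only shows the default pre-padding/truncation behaviour):

```python
# Sketch of pad_sequences: pad (or truncate) every sequence to a fixed
# length, by default pre-padding with zeros so index 0 can serve as a
# "no token" marker for the embedding layer.
def pad_sequences(sequences, maxlen, value=0, padding="pre"):
    padded = []
    for seq in sequences:
        # Keep at most maxlen items (drop from the front when pre-padding).
        seq = list(seq)[-maxlen:] if padding == "pre" else list(seq)[:maxlen]
        pad = [value] * (maxlen - len(seq))
        padded.append(pad + seq if padding == "pre" else seq + pad)
    return padded
```

For example, `pad_sequences([[1, 2], [1, 2, 3, 4]], maxlen=3)` yields `[[0, 1, 2], [2, 3, 4]]`.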

neo4j graph to JSON

I'm considering different options for taking a Neo4j graph and displaying it on the web. At the moment I'm considering a Java-based reader of the database that creates JSON output for display in the browser.
Is JSON suitable for displaying tree-like structures? In my case I have a parent-child style organisation chart.
Could you give me an example if this is possible? Thanks.
Yes. Use JSON to pass over all the geometry: lists of nodes and edges, locations and sizes of nodes, spline data for edges, etc. Convert all the data to an SVG DOM, which can be done dynamically. (Hint: make sure you use the SVG namespace, e.g. with createElementNS.) The real trick, more than anything, is to do all the calculations before the data hits the browser. Simple calculations work fine in JavaScript, but anything complicated is best done elsewhere.
No example, unfortunately. The code I wrote is not available to the public.
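Not the original poster's code, but a minimal illustrative sketch of how a parent-child org chart (as rows of id/name/parent, which is what a Neo4j query could return) can be serialised to nested JSON; all names and fields here are made up:

```python
# Sketch: turn flat (id, name, parent_id) records into a nested JSON tree.
import json

def build_tree(records, root_id):
    """records: list of (id, name, parent_id); returns a nested dict."""
    children, names = {}, {}
    for node_id, name, parent in records:
        names[node_id] = name
        children.setdefault(parent, []).append(node_id)

    def node(nid):
        return {"id": nid, "name": names[nid],
                "children": [node(c) for c in children.get(nid, [])]}

    return node(root_id)

records = [(1, "CEO", None), (2, "CTO", 1), (3, "CFO", 1), (4, "Dev", 2)]
print(json.dumps(build_tree(records, 1), indent=2))
```

Nested `children` arrays like this are the natural JSON shape for a tree and are what most browser-side tree/graph renderers expect.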

How to aggregate points with the same value into polygons from a shapefile using GDAL or another open-source solution

I have a shapefile with around 19,000 points; it's basically an export from a raster. I need to extract polygons by aggregating the points that have the same value. The field whose value I use for aggregation is dynamically calculated each time using the elevation of the points. Now I need to output polygons. How can I do that using GDAL? Is there a utility for it? Any other open-source solutions are welcome.
I have ArcGIS, which has a toolbox called 'Aggregate Points', but somehow the licence for it is missing.
Here are some possibilities:
You can write a program using GDAL (actually OGR) in C++ or Python (or any other language for which GDAL/OGR provides bindings) and construct Polygon objects from the selected subsets of your points. Then you can serialise those polygons to a Shapefile or any other storage format supported by OGR.
Alternatively, forget about GDAL/OGR and load your data into a PostgreSQL database enabled with PostGIS. Then use PostGIS functionality to construct the polygons.
There is an example of polygon construction from points, based on brute-force string manipulation and use of a geometry constructor, posted in the postgis-users thread Making a Polygon from Points.
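As a pure-Python sketch of the aggregation idea: group points by their attribute value, then build one polygon per group. A convex hull is used here purely for illustration (it is an assumption, not what ArcGIS's Aggregate Points computes); with GDAL/OGR installed the same grouping loop would feed OGR geometries instead:

```python
# Sketch: group points by value, polygonise each group with a convex hull
# (Andrew's monotone chain algorithm, counter-clockwise hull).
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def aggregate(points_with_value):
    """points_with_value: iterable of (x, y, value) -> {value: hull vertices}."""
    groups = {}
    for x, y, value in points_with_value:
        groups.setdefault(value, []).append((x, y))
    return {v: convex_hull(pts) for v, pts in groups.items()}
```

In an OGR version, each hull's vertex list would be wrapped in an `ogr.Geometry` ring and written to a polygon layer of a new Shapefile.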

What are common GIS methods?

I want to know the main methods used in GIS to connect a location with its information (spatial access methods, SAM).
I have read on some web sites that two methods are:
vector
raster
Are those the methods I'm looking for?
Thank you ^_^
There are indeed, in general, two types of GIS data sources: vector and raster.
With vector data, the attribute data can be stored in several ways:
- Side by side in a spatial database
- In a vector file paired with an attribute file (an ESRI shapefile with an ESRI dbf)
- Connected in an application (e.g. in MapInfo, connecting points to an Excel sheet based on a common attribute)
With raster data, all you have is the numeric value of each pixel.
The process of connecting data to geo-spatial coordinates is called geocoding.
Other common methods use simple coordinates (lat/long or GPS).
Vector/raster refers to the two main types of graphical data that a GIS might use to render a map.
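The "geometry paired with an attribute table" idea above (a .shp with its .dbf, or points joined to a spreadsheet on a common field) boils down to a keyed join; a tiny sketch with entirely made-up identifiers and values:

```python
# Sketch: geometry and attributes stored separately, linked by a shared ID,
# then joined into features - the vector-side attribute connection.
geometries = {"A1": (51.5, -0.12), "A2": (48.85, 2.35)}   # id -> (lat, lon)
attributes = {"A1": {"name": "London"}, "A2": {"name": "Paris"}}

features = {fid: {"coords": geometries[fid], **attributes[fid]}
            for fid in geometries}
```

A raster, by contrast, has no such join: the pixel's position in the grid is the location, and its numeric value is the only attribute.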