PyMel duplicate and reparent preserving hierarchy - duplicates

Is there a way to preserve a parented hierarchy when duplicating and reparenting with PyMel?
I have a nested hierarchy of nodes that I want to duplicate and parent under a new group node. I select the top node of the hierarchy with hi=True to ensure the entire hierarchy is selected, but when I reparent after duplicating, the hierarchy is always lost. E.g.:
Duplicating this node tree:
root
|-> branch
    |-> leaf
Then parenting it under a new group node ("GRP") yields a flattened result:
GRP
|-> root
|-> branch
|-> leaf
import pymel.core as pm
root = pm.spaceLocator( n="root" )
branch = pm.spaceLocator( n="branch" )
leaf = pm.spaceLocator( n="leaf" )
pm.parent(leaf, branch)
pm.parent(branch, root)
grp = pm.group( em=True, name='GRP' )
pm.select( clear=True )
pm.select( 'root', hi=True )
root = pm.duplicate()
pm.parent( root, grp )
TIA!

Solved!
Simplest solution I've found: reselect only the top node of the hierarchy before parenting, e.g.:
pm.select( 'root', hi=False )
pm.parent( 'root', 'GRP' )

Related

What is an efficient way to update existing children and append new child objects to a parent in SQLAlchemy?

What is an efficient way to update existing child models, add new child models to a parent model, and also remove child models that are no longer present? Here is my current attempt:
child_inputs = [{'child_id': 1, 'name': 'test1'}, {'name': 'test2'}]
parent = Parent.query.get(id)
# remove children that are not present in child_inputs
# (e.g. a child_id 2 that was appended to the Parent earlier)
for inputs in child_inputs:
    if 'child_id' in inputs:
        child = Child.query.get(inputs['child_id'])
        child.name = inputs['name']
    else:
        child = Child(**inputs)
        parent.append(child)
db.session.commit()
I think the documentation on Merging is exactly what you need.
You need to make sure that 'child_id' is actually the name of the primary-key field, or just rename the key before passing the data through the merge.
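Independent of the ORM, the synchronisation boils down to a three-way diff between the existing children and the incoming payload: update rows whose id is present, create rows without an id, and remove ids that no longer appear. A minimal, ORM-agnostic sketch of that logic (the `sync_children` helper and the generated ids are hypothetical, for illustration only):

```python
def sync_children(existing, inputs):
    """Diff existing children (a dict keyed by child_id) against inputs.

    Returns (updated, created, removed_ids). Inputs without a 'child_id'
    are treated as new children; existing ids absent from the inputs are
    marked for removal.
    """
    seen_ids = set()
    updated, created = [], []
    next_id = max(existing, default=0) + 1  # stand-in for DB-generated ids
    for item in inputs:
        if 'child_id' in item and item['child_id'] in existing:
            updated.append(dict(existing[item['child_id']], name=item['name']))
            seen_ids.add(item['child_id'])
        else:
            created.append({'child_id': next_id, 'name': item['name']})
            next_id += 1
    removed_ids = [cid for cid in existing if cid not in seen_ids]
    return updated, created, removed_ids

existing = {1: {'child_id': 1, 'name': 'old1'}, 2: {'child_id': 2, 'name': 'old2'}}
inputs = [{'child_id': 1, 'name': 'test1'}, {'name': 'test2'}]
updated, created, removed = sync_children(existing, inputs)
```

With `session.merge` the update branch collapses into a single call per input, but the removal of stale children still needs an explicit diff like the one above.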

Create nodes conditionally when loading nodes from csv in Neo4j

I have a data set in CSV format. One of the fields is "elem_type". Based on this type I need to create different types of nodes, and give the "columns" of my CSV different names depending on the "elem_type" when loading the data with LOAD CSV. Is there any way to do that?
My csv has no header and the data look like this:
0, 123, Marco, Ciao
1, 345, Merceds, Car, Yellow
2, 987, Boat, 150cm
Based on the first column, which is my "elem_type", I want to load the data and define 3 types of nodes (Person, Car, Boat), and also define different headers based on the elem_type.
I highly recommend pre-parsing the CSV file into separate files for each label; it will make the Cypher for the import much easier. In the following I use a little trick by wrapping a CASE expression inside a FOREACH:
load csv from "file:///test.csv" as line
foreach (i in case when line[0] = '0' then [1] else [] end |
    merge (p:Person {id: line[1]}) set p.name = line[2])
foreach (i in case when line[0] = '1' then [1] else [] end |
    merge (c:Car {id: line[1]}) set c.name = line[2], c.color = line[4])
foreach (i in case when line[0] = '2' then [1] else [] end |
    merge (b:Boat {id: line[1]}) set b.name = line[2])
Also, don't forget to add indexes on the properties you are merging on.
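The recommended pre-parsing step only takes a few lines of Python. A sketch (the label-to-type mapping and file names are assumptions, adjust to your data) that buckets the raw rows by elem_type, ready to be written out as one CSV per label:

```python
import csv
from collections import defaultdict

# Assumed mapping from elem_type code to output label/file.
LABELS = {'0': 'person', '1': 'car', '2': 'boat'}

def split_by_type(rows):
    """Group raw CSV rows into per-label lists, keyed by elem_type.

    Drops the elem_type column and strips the stray whitespace that
    comes from the ", "-separated sample data.
    """
    buckets = defaultdict(list)
    for row in rows:
        elem_type = row[0].strip()
        buckets[LABELS[elem_type]].append([field.strip() for field in row[1:]])
    return dict(buckets)

rows = list(csv.reader([
    "0, 123, Marco, Ciao",
    "1, 345, Merceds, Car, Yellow",
    "2, 987, Boat, 150cm",
]))
buckets = split_by_type(rows)
# Each bucket can now be written with csv.writer to e.g. person.csv,
# car.csv, boat.csv and imported with a simple per-label LOAD CSV.
```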

Cayley : How to put the limit/depth to show the graph hierarchy in cayley?

I need help limiting the number of nodes shown in the graph hierarchy in Cayley. In OrientDB we have a depth function to restrict the hierarchy to any number of levels up and the same number of levels down.
Example: I have a below hierarchy:
A DependsOn B
B RunsOn C
C DependsOn D
D ConnectedTo E
Now, for the above example I have written the query below to show the graph hierarchy.
var path = g.M().Both();
g.V("B").FollowRecursive(path).ForEach(function(v) {
    g.V(v.id).OutPredicates().ForEach(function(r) {
        g.V(v.id).Out().ForEach(function(t) {
            var node = {
                source: v.id,
                relation: r.id,
                target: t.id
            };
            g.Emit(node);
        });
    });
});
So when I pass B to the query it returns the complete hierarchy, but for a 1-level hierarchy from B I want to show only the A, B & C nodes; likewise, for a 2-level hierarchy I want to show A, B, C & D, since it should show 2 levels up and 2 levels down from the B node.
You can limit the depth by passing the max depth as the second parameter to the FollowRecursive function:
g.V("B").FollowRecursive(path, 2)
Please note that you start a new path inside the ForEach, and that inner path does not know about the max depth of the outer traversal.
A more detailed discussion of this use case can be found in the 'cross-post' on the official Cayley forum:
https://discourse.cayley.io/t/cayley-0-7-0-depth-function-issue/1066

What is good way to import a directory/file structure in Neo4j from CSV file?

I am looking to import a lot of filenames into a graph database, using Neo4j. The data is from an external source and available in a CSV file. I'd like to create a tree structure from the data, so that I can easily 'navigate' the structure in queries later on (i.e. find all files underneath a certain directory, all files that occur in multiple directories, etc.).
So, given the example input:
/foo/bar/example.txt
/bar/baz/another.csv
/example.txt
/foo/bar/onemore.txt
I'd like to create the following graph:
( / ) <-[:in]- ( foo ) <-[:in]- ( bar ) <-[:in]- ( example.txt )
                                        <-[:in]- ( onemore.txt )
      <-[:in]- ( bar ) <-[:in]- ( baz ) <-[:in]- ( another.csv )
      <-[:in]- ( example.txt )
(where each node label is actually an attribute, e.g. path:).
I've been able to achieve the desired effect when using a fixed number of directory levels; for example, when each file is three levels deep, I could create a CSV file with 4 columns:
dir_a,dir_b,dir_c,file
foo,bar,baz,example.txt
foo,bar,ban,example.csv
foo,bar,baz,another.txt
And import it using a Cypher query:
LOAD CSV WITH HEADERS FROM "file:///sample.csv" AS row
MERGE (dir_a:Path {name: row.dir_a})
MERGE (dir_b:Path {name: row.dir_b}) <-[:in]- (dir_a)
MERGE (dir_c:Path {name: row.dir_c}) <-[:in]- (dir_b)
MERGE (:Path {name: row.file}) <-[:in]- (dir_c)
But I'd like to have a general solution that works for any level of sub-directories (and combinations of levels in one dataset). Note that I am able to pre-process my input if necessary, so I can create any desirable structure in the input CSV file.
I've looked at gists and plugins, but cannot seem to find anything that works. I think/hope I should be able to do something with the split() function, i.e. use split(row.path, '/') to get a list of path elements, but I do not know how to process this list into a chain of MERGE operations.
Here is a first cut at something more generalized.
The premise: split the fully qualified path into components, then use each component to construct the fully qualified path of every individual component of the larger path. Use that as the key to merge on, and set the individual component name after the merge. If a component is not at the root level, find its parent and create the relationship back to it. Note that this breaks down if there are duplicate component names within a fully qualified path.
First, I started by creating a uniqueness constraint on fq_path:
create constraint on (c:Component) assert c.fq_path is unique;
Here is the load statement.
load csv from 'file:///path.csv' as line
with line[0] as line, split(line[0], '/') as path_components
unwind range(0, size(path_components) - 1) as idx
with case
       when idx = 0 then '/'
       else path_components[idx]
     end as component,
     case
       when idx = 0 then '/'
       else split(line, path_components[idx])[0] + path_components[idx]
     end as fq_path,
     case
       when idx = 0 then null
       when idx = 1 then '/'
       else substring(split(line, path_components[idx])[0], 0,
                      size(split(line, path_components[idx])[0]) - 1)
     end as parent,
     case
       when idx = 0 then []
       else [1]
     end as find_parent
merge (new_comp:Component {fq_path: fq_path})
set new_comp.name = component
foreach (y in find_parent |
    merge (theparent:Component {fq_path: parent})
    merge (theparent)<-[:IN]-(new_comp)
)
return *
If you want to differentiate between files and folders here are a few queries you can run afterwards to set another label on the respective nodes.
Find the files and set them as File
// find the last Components in a tree (no inbound IN)
// and set them as Files
match (c:Component)
where not (c)<-[:IN]-(:Component)
set c:File
return c
Find the folders and set them as Folder
// find all Components with an inbound IN
// and set them as Folders
match (c:Component)
where (c)<-[:IN]-(:Component)
set c:Folder
return c

Encoding a binary tree to json

I'm using sqlalchemy to store binary tree data in the db:
class Distributor(Base):
    __tablename__ = "distributors"
    id = Column(Integer, primary_key=True)
    upline_id = Column(Integer, ForeignKey('distributors.id'))
    left_id = Column(Integer, ForeignKey('distributors.id'))
    right_id = Column(Integer, ForeignKey('distributors.id'))
How can I generate JSON "tree"-format data like the following:
{"id": 1, "children": [{"id": 2, "children": [{"id": 3}, {"id": 4}]}]}
I'm guessing you're asking how to store the data in a JSON format? Or are you trying to construct JSON from the standard relational data?
If the former, why don't you just create entries like:
{id: XX, parentId: XX, left: XX, right: XX, value: "foo"}
For each of the nodes, and then reconstruct the tree manually from the entries? Just start from the head (parentId == null) and then assemble the branches.
You could also add an additional identifier for the tree itself, in case you have multiple trees in the database. Then you would just query where the treeId was XXX, and then construct the tree from the entries.
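That reconstruction can be sketched over plain dicts (the `build_tree` helper is hypothetical; field names follow the entry format above, and one tree per entry list is assumed):

```python
import json

def build_tree(entries):
    """Reconstruct a nested tree from flat {id, parentId, value} entries."""
    # One node dict per entry, so parent/child links can share references.
    nodes = {e['id']: {'id': e['id'], 'value': e['value'], 'children': []}
             for e in entries}
    root = None
    for e in entries:
        if e['parentId'] is None:
            root = nodes[e['id']]  # the head of the tree
        else:
            nodes[e['parentId']]['children'].append(nodes[e['id']])
    return root

entries = [
    {'id': 1, 'parentId': None, 'value': 'head'},
    {'id': 2, 'parentId': 1, 'value': 'left'},
    {'id': 3, 'parentId': 1, 'value': 'right'},
]
tree = build_tree(entries)
json_text = json.dumps(tree)  # ready to serialize
```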
I hesitate to provide this answer, because I'm not sure I really understand the problem you're trying to solve (a binary tree, JSON, sqlalchemy: none of these are problems by themselves).
What you can do with this kind of structure is iterate over each row, adding edges as you go. You'll start with what is basically a cache of objects, which will eventually become the tree you need.
import collections

idmap = collections.defaultdict(dict)
for distributor in session.query(Distributor):
    dist_dict = idmap[distributor.id]
    dist_dict['id'] = distributor.id
    dist_dict.setdefault('children', [])
    if distributor.left_id:
        dist_dict['children'].append(idmap[distributor.left_id])
    if distributor.right_id:
        dist_dict['children'].append(idmap[distributor.right_id])
So we've got a big collection of linked-up dicts that can represent the tree. We don't know which one is the root, though:
root_dist = session.query(Distributor).filter(Distributor.upline_id == None).one()
json_data = json.dumps(idmap[root_dist.id])
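The same pattern can be exercised without a database; a self-contained sketch with namedtuple rows standing in for the ORM query results (the sample rows are made up):

```python
import collections
import json

Row = collections.namedtuple('Row', 'id upline_id left_id right_id')

# Hypothetical rows: 1 is the root, with 2 and 3 as its left/right children.
rows = [Row(1, None, 2, 3), Row(2, 1, None, None), Row(3, 1, None, None)]

# defaultdict hands out a placeholder dict on first access, so a child
# can be linked into 'children' before its own row has been processed.
idmap = collections.defaultdict(dict)
for r in rows:
    dist_dict = idmap[r.id]
    dist_dict['id'] = r.id
    dist_dict.setdefault('children', [])
    if r.left_id:
        dist_dict['children'].append(idmap[r.left_id])
    if r.right_id:
        dist_dict['children'].append(idmap[r.right_id])

root = next(r for r in rows if r.upline_id is None)
json_data = json.dumps(idmap[root.id], sort_keys=True)
```

The shared references mean the placeholder dicts get filled in later, so row order does not matter.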