I have been digging through the IBM Knowledge Center and have come away frustrated at the lack of definition for some of their scripting interfaces.
IBM keeps talking about a containment path used in their ID definitions, and I can only assume it corresponds to an XML element within a file in the WebSphere config folder hierarchy, but that's only from observation. I haven't found this declared in any documentation as yet.
Also, there is a specific syntax that must be used to retrieve an ID, yet I cannot find a reference listing the possible 'type' values used in the AdminConfig.getid(...) function call.
So to summarize, I have a couple of questions:
What is the correct definition of the ID "hierarchy", both for fetching an ID and for the ID itself?
e.g. to get the ID of a server I would say something like: AdminConfig.getid('/Cell:MYCOMPUTERNode01Cell/Node:MYCOMPUTERNode01/Server:server1'), which would give me an ID something like: server1(cells/MYCOMPUTERNode01Cell/nodes/MYCOMPUTERNode01/servers/server1|server.xml#server_1873491723429)
What are the possible /Type values used when retrieving an ID from the WebSphere server?
e.g. in the above example, /Cell, /Node and /Server are examples of type values used in queries.
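For reference, here is a minimal wsadmin (Jython) sketch of that style of lookup; the cell, node and server names are just the placeholders from the example above:
cellId = AdminConfig.getid('/Cell:MYCOMPUTERNode01Cell')
serverId = AdminConfig.getid('/Cell:MYCOMPUTERNode01Cell/Node:MYCOMPUTERNode01/Server:server1')
# the containment path can also be partial, e.g. every server named server1 regardless of node:
allServer1Ids = AdminConfig.getid('/Server:server1/')
print serverId   # prints the configuration ID string shown above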
UPDATE: I have discovered this document that outlines the locations of various configuration files in the configuration directory.
UPDATE: I am starting to think that the configuration files represent complex objects with attributes and nested attributes. Not every object with an 'id' is queryable through AdminConfig.getid(<containment_path>). Object attributes (not attributes in the strict XML sense, since an 'attribute' here could be a nested node with a simple structure inside the parent node) may be queried using AdminConfig.showAttribute(<element_id>, <attribute_name>). This function returns either the string value of an inline attribute of the element itself or a string-list representation of the IDs of the nested attribute node(s).
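To illustrate the two return shapes, a small wsadmin (Jython) sketch ('name' is an inline attribute of a Server; 'components' is assumed here as an example of a nested attribute):
serverId = AdminConfig.getid('/Cell:MYCOMPUTERNode01Cell/Node:MYCOMPUTERNode01/Server:server1')
print AdminConfig.showAttribute(serverId, 'name')        # inline attribute -> the string 'server1'
print AdminConfig.showAttribute(serverId, 'components')  # nested attribute -> a string like '[id1 id2 ...]'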
UPDATE: I have discovered the AdminConfig.types() function, which displays a list of all manipulable object types in the configuration files, and the AdminConfig.parent() function, which displays which types are considered parents of the specified type.
Note: The AdminConfig.parent() function does not reveal the parents in order of their hierarchy; rather, it seems to just present a list of parents. e.g. AdminConfig.parent('JDBCProvider') gives us this exact list: 'Cell\nDeployment\nNode\nServer\nServerCluster', and even though Server comes before ServerCluster, running AdminConfig.parent('Server') reveals its parents as: 'Node\nServerCluster'. Also, while some types can have no parents - e.g. Cell - some types produce an error when the parent function is run on them - e.g. Deployment.
Due to the apparent lack of a reciprocal AdminConfig.children() function, it appears that obtaining a full hierarchical tree of parents/children lies in calling this function on each of the types it returns and combining the results (see the sketch below). That being said, a trial-and-error approach is mostly fruitful because the ordering is often obvious.
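Here is a rough wsadmin (Jython) sketch of that combine-the-results approach; it assumes AdminConfig.types() and AdminConfig.parent() both return newline-separated strings, and it catches the error that types such as Deployment raise:
parentsOf = {}
for t in AdminConfig.types().splitlines():
    try:
        parentsOf[t] = AdminConfig.parent(t).splitlines()
    except:
        parentsOf[t] = []   # e.g. Deployment raises an error; Cell simply has no parents
# invert the map to approximate the missing AdminConfig.children() view
childrenOf = {}
for child in parentsOf.keys():
    for p in parentsOf[child]:
        childrenOf.setdefault(p, []).append(child)
print childrenOf.get('Node', [])   # the types whose parent list includes Node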
Example: What is the difference between:
List<UserCompany> findByCompany_IdAndCompany_IsActivated(params)
and
List<UserCompany> findByCompanyIdAndCompanyIsActivated(params)
There is no difference if your model is unambiguous with respect to field names.
List<UserCompany> findByCompanyIdAndCompanyIsActivated(params)
This first assumes that companyId and companyIsActivated are properties of UserCompany and tries to resolve them. If that fails, it then assumes that UserCompany has a field company (which is another class) and that Company has the fields id and isActivated, and tries to resolve those.
Whereas the one below,
List<UserCompany> findByCompany_IdAndCompany_IsActivated(params)
directly assumes that UserCompany has a field company (which is another class) and that Company has the fields id and isActivated, and tries to resolve those.
From the Spring documentation on property expressions:
Property expressions can refer only to a direct property of the managed entity, as shown in the preceding example. At query creation time you already make sure that the parsed property is a property of the managed domain class. However, you can also define constraints by traversing nested properties. Assume Persons have Addresses with ZipCodes. In that case a method name of List<Person> findByAddressZipCode(ZipCode zipCode); creates the property traversal x.address.zipCode. The resolution algorithm starts with interpreting the entire part (AddressZipCode) as the property and checks the domain class for a property with that name (uncapitalized). If the algorithm succeeds it uses that property. If not, the algorithm splits up the source at the camel case parts from the right side into a head and a tail and tries to find the corresponding property, in our example, AddressZip and Code. If the algorithm finds a property with that head it takes the tail and continues building the tree down from there, splitting the tail up in the way just described. If the first split does not match, the algorithm moves the split point to the left (Address, ZipCode) and continues.
Although this should work for most cases, it is possible for the algorithm to select the wrong property. Suppose the Person class has an addressZip property as well. The algorithm would match in the first split round already, essentially choose the wrong property, and finally fail (as the type of addressZip probably has no code property). To resolve this ambiguity you can use _ inside your method name to manually define traversal points. So our method name would end up like so:
List<Person> findByAddress_ZipCode(ZipCode zipCode);
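To make that splitting strategy concrete, here is a rough illustrative sketch in Python of the right-to-left camel-case splitting the documentation describes (a toy model only, not Spring's actual implementation; the property sets are made up):
import re

def camel_parts(name):
    # "AddressZipCode" -> ["Address", "Zip", "Code"]
    return re.findall(r'[A-Z][a-z0-9]*', name)

def resolve(source, known_properties):
    # try the whole expression first, then move the split point from right to left
    parts = camel_parts(source)
    for i in range(len(parts), 0, -1):
        head = ''.join(parts[:i])
        candidate = head[0].lower() + head[1:]   # uncapitalize, e.g. "AddressZip" -> "addressZip"
        if candidate in known_properties:
            tail = ''.join(parts[i:])
            return candidate, tail   # the tail would be resolved the same way on the nested type
    return None, source

print(resolve("AddressZipCode", {"address"}))      # ('address', 'ZipCode')
print(resolve("AddressZipCode", {"addressZip"}))   # ('addressZip', 'Code')  <- the ambiguous case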
The underscore is a reserved character that lets you point to the right object when constructing the JPA query. It is used only with nested objects, for example if you would like to query by ZipCode inside Address inside your Company object.
More information can be found here
How can I get the output of this generator? Neither out.next() nor next(out) works:
out=nx.tree_all_pairs_lowest_common_ancestor(G)
print(out)
<generator object tree_all_pairs_lowest_common_ancestor at 0x000002BE4EF90D48>
nx.tree_all_pairs_lowest_common_ancestor is only designed to work on certain graph structures, as mentioned in the docs. When no root is specified, as in your case, the function does the following:
If root is not specified, find the exactly one node with in degree 0 and use it. Raise an error if none are found, or more than one is. Also check for any nodes with in degree larger than 1, which would imply G is not a tree.
So it is likely that your graph either has multiple root nodes or none at all, i.e. it is not a tree. You can either search locally using a breadth-first search, or specify the root node of the subtree to operate on in nx.tree_all_pairs_lowest_common_ancestor.
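For completeness, here is a small sketch of how the generator is normally consumed once the graph really is a tree (or once you pass an explicit root); the graph below is a made-up example:
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([(1, 2), (1, 3), (2, 4), (2, 5)])   # a small rooted tree with root 1

# ask only for the pairs you care about (omit pairs= to get all pairs), then materialise the generator
out = nx.tree_all_pairs_lowest_common_ancestor(G, root=1, pairs=[(4, 5), (4, 3)])
print(dict(out))   # e.g. {(4, 5): 2, (4, 3): 1}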
Just getting started with Chef recently. I gather that attributes are stored in one large monolithic hash named node that's available for use in your recipes and templates.
There seem to be multiple ways of defining attributes:
Directly in the recipe itself
Under an attributes file - e.g. attributes/default.rb
In a JSON object that's passed to the chef-solo call. e.g. chef-solo -j web.json
Given the above three, I'm curious:
Are those all the ways attributes can be defined?
What's the order of precedence here? I'm assuming one of these methods supersedes the others.
Is #3 (the JSON method) only valid for chef-solo?
I see both node and default hashes defined. What's the difference? My best guess is that the default hash defined in attributes/default.rb gets merged into the node hash?
Thanks!
Your last question is probably the easiest to answer. In an attributes file you don't have to type 'node', so this in attributes/default.rb:
default['foo']['bar']['baz'] = 'qux'
Is exactly the same as this in recipes/whatever.rb:
node.default['foo']['bar']['baz'] = 'qux'
In retrospect having different syntaxes for recipes and attributes is confusing, but this design choice dates back to extremely old versions of Chef.
The -j option is available to both chef-client and chef-solo and sets attributes in either case. Note that these will be 'normal' attributes, which persist in the node object and are generally not recommended. However, the 'run_list', 'chef_environment' and 'tags' on servers are implemented this way. It is generally best to avoid other 'normal' attributes and to avoid node.normal['foo'] = 'bar' or node.set['foo'] = 'bar' in recipe (or attribute) files. The difference is that if you delete the node.normal line from the recipe, the old setting on the node will persist, while if you delete a node.default setting from a recipe, that setting will be removed from the node the next time you run chef-client.
What happens in a chef-client run to make this possible is that at the start of the run the client issues a GET to fetch its old node document from the server. It then wipes the default, override and automatic (Ohai) attributes while keeping the 'normal' attributes. The behavior of the default, override and automatic attributes makes the most sense: you start over at the start of the run and then construct all the state, and if it's not in the recipe you don't see a value there. However, the run_list is normally set on the node, and nodes do not (often) manage their own run_list. In order to make the run_list persist, it is a normal attribute.
The choice of the word 'normal' is unfortunate, as is the choice of node.set for setting 'normal' attributes. While those look like the obvious choices for setting attributes, users should avoid them. Again, the problem is that they came first and are still necessary and required for the run_list. Generally, stick with default and override attributes only; typically you can get most of your work done with default attributes, and those should be preferred.
There's a big precedence level picture here:
https://docs.chef.io/attributes.html#attribute-precedence
That's the ultimate source of truth for attribute precedence.
That graph describes all the different ways that attributes can be defined.
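As a purely illustrative toy (not Chef code, and collapsed to just four of the levels), the mental model is a stack of hashes where later precedence levels win when the layers are merged:
# toy model only: Chef itself is Ruby and has more precedence levels than shown here
layers = {
    'default':   {'port': 80, 'motd': 'hello'},
    'normal':    {'motd': 'persisted value'},
    'override':  {'port': 8080},
    'automatic': {'platform': 'ubuntu'},   # Ohai data, highest precedence
}
merged = {}
for level in ('default', 'normal', 'override', 'automatic'):   # lowest to highest
    merged.update(layers[level])
print(merged)   # {'port': 8080, 'motd': 'persisted value', 'platform': 'ubuntu'}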
The problem with Chef attributes is that they've grown organically and sprouted many options to try to help out users who painted themselves into a corner. In general you should never need to touch the automatic, normal, force_default or force_override levels of attributes. You should also avoid setting attributes in recipe code; move attribute-setting out of recipes and into attribute files. What this leaves are these places to set attributes:
in the initial -j argument (this sets normal attributes; you should limit its use to setting the run_list, and overusing it is generally a smell)
in the role file as default or override precedence levels (careful with this one though because roles are not versioned and if you touch these attributes a lot you will cause production issues)
in the cookbook attributes file as default or override precedence levels (this is where you should set most of your attributes)
in environment files as default or override precedence levels (can be useful for settings like DNS servers in a datacenter, although you can use roles and/or cookbooks for this as well)
You can also set attributes in recipes, but when you do that you invariably wind up getting your next lesson in the two-phase compile-converge model that Chef recipes run through. If you have recipes that need to communicate with each other, it's better to use node.run_state, which is just a hash that doesn't get written as node attributes. You can drop node.run_state[:foo] = 'bar' in one recipe and read it in another. You probably will see recipes that set attributes, though, so you should be aware of that.
Hope That Helps.
When writing a cookbook, I visualize three levels of attributes:
Default values to converge successfully -- attributes/default.rb
Local testing override values -- JSON or .kitchen.yml (have you tried chef_zero using ChefDK and Kitchen?)
Environment/role override values -- link listed in lamont's answer: https://docs.chef.io/attributes.html#attribute-precedence
I have created a class that I've been using as the storage for all listings in my applications. The class allows me to "sign" an object to a listing (which can be created on the fly via the sign() method like so):
manager.sign(myObject, "someList");
This stores the index of the element (using its unique id) in the newly created or previously created listing "someList", as well as the object itself, in a 2D array. So for example, I might end up with this:
trace(_indexes["someList"][objectId]); // 0 - the object is the first in this list
trace(_instances["someList"]); // [object MyObject]
The class has another two methods:
find(signature:String):Array
This method returns an array via slice() containing all of the elements signed with the given signature.
findFirst(signature:String):Object
This method just returns the first object in a given listing
So to retrieve myObject I can either go:
trace(find("someList")[0]); or trace(findFirst("someList"));
Finally, there is an unsign() function which will remove an object from a given listing. This function basically:
Stores the result of pop() on the specified listing in a variable.
Uses the stored index to quickly replace the specified object with the pop()'d item.
Deletes the stored index for the specified object and updates the index for the pop()'d item.
Through all this, unsign() removes an object extremely quickly (effectively in constant time) from a listing of any size - see the sketch below.
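For anyone reading along, the removal described above is essentially the classic swap-and-pop trick; here is a rough, language-agnostic sketch of it in Python (the names and the use of id() are made up for illustration, this is not the AviManager code itself):
_instances = {'someList': []}   # signature -> list of objects
_indexes   = {'someList': {}}   # signature -> {object id -> position in that list}

def sign(obj, signature):
    lst = _instances.setdefault(signature, [])
    _indexes.setdefault(signature, {})[id(obj)] = len(lst)
    lst.append(obj)

def unsign(obj, signature):
    lst, idx = _instances[signature], _indexes[signature]
    popped = lst.pop()          # take the last item off the listing
    pos = idx.pop(id(obj))      # drop the stored index for the object being removed
    if popped is not obj:
        lst[pos] = popped       # overwrite the removed object's slot with the popped item
        idx[id(popped)] = pos   # and update the popped item's stored index
# the cost of unsign() is independent of the listing's size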
Now this is all well and good, but I've had some thoughts that make me question how good this really is. I mean, being able to easily list, remove and access lists of anything I want throughout the application like this is awesome - but is there a catch?
A couple of starting thoughts I have had are:
So far I haven't implemented support for listings that are private and only accessible via a given class.
Memory - this doesn't seem very memory efficient. Then again, neither is creating arrays individually for everything I want to store. It just seems... larger... somehow.
Any insights?
I've uploaded the class here in case the above doesn't make much sense: https://projectavian.com/AviManager.as
Your solution seems pretty solid. If you're looking to modify it to be a bit more extensible and handle rights management, you might consider moving all those individually indexed properties to a value object for your AV elements. You could perform operations like "sign" and "unsign" internally in the VOs, or check for access rights. Your management class could monitor the collection of these VOs, pass them around, perform the method calls, and the objects would hold the state in a bit more readable format.
Really, though, this is entering into a coding style discussion. Your method works and it's not particularly inefficient. Just make sure the code is readable, encapsulated, and extensible and you're good.
Could anyone explain what node means in MySQL?
This one is talking about nested sets.
I am reading a document from Codeigniter's wiki and I am not sure what I am supposed to add for $node.
getSubTreeAsHTML (line 704)
Renders the fields for each node starting at the given node
* return: Sample HTML render of tree
string getSubTreeAsHTML (array $node, [array $fields = array()])
* array $node: The node to start with
* array $fields: The fields to display for each node
The code you are looking at is part of BackendPro. The full code of the function can be viewed here.
"Node" in this case refers to a data node of a nested set. Nested sets are nice because they allow you to quickly select a whole branch of a hierarchy with only the starting node.
It looks like the function specified renders part of a tree in HTML starting at the node you specify. The BackendPro interface must render hierarchical data somewhere.
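To make the "grab a whole branch from one starting node" idea concrete, here is a toy illustration in Python (rather than SQL) with made-up left/right values; in a nested set every node stores an interval, and its descendants are simply the nodes whose intervals fall inside it:
nodes = [
    {'name': 'Electronics', 'lft': 1, 'rgt': 10},
    {'name': 'Televisions', 'lft': 2, 'rgt': 5},
    {'name': 'LCD',         'lft': 3, 'rgt': 4},
    {'name': 'Portable',    'lft': 6, 'rgt': 9},
    {'name': 'MP3 Players', 'lft': 7, 'rgt': 8},
]

def subtree(start_name):
    # everything whose interval sits inside the starting node's interval is a descendant
    start = next(n for n in nodes if n['name'] == start_name)
    return [n for n in nodes if start['lft'] <= n['lft'] and n['rgt'] <= start['rgt']]

print([n['name'] for n in subtree('Portable')])   # ['Portable', 'MP3 Players']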
EDIT: Nested sets are not a MySQL-only concept, but there is a good article about nested sets and MySQL here:
http://mikehillyer.com/articles/managing-hierarchical-data-in-mysql/