I have two ACLs defined to filter allowed state-transitions. Something like
1.ACL: state_A
- Match settings:
- Properties
- Ticket
- State: state_A
- Possible:
- Ticket
- State: state_B
2.ACL: state_B
- Match settings:
- Properties
- Ticket
- State: state_B
- Possible:
- Ticket
- State: state_A
Here state_A is of a custom type, and state_B is of type closed. I want to allow a really simple state transition when I click on "Edit Note".
If the ACLs are disabled, I can easily change from one state to another in the Note. However, if the first ACL is enabled, which should only allow the transition from A to B (and should not influence transitions from B to A), I cannot move from B to A(!).
I checked whether I am "blocking" myself in the Generic Agent (e.g. something automatically switching from A back to B), but no. To me it does not make any sense. Can it have something to do with the fact that I want to change a closed ticket state to some custom ticket state? If so, why does it work when I disable the ACLs?
Has anyone had a similar experience? Any hints about what might have gone wrong are welcome. Thanks.
1) It's not clear to me what your use case is and what you are trying to cover.
2) Normally, ACLs as you describe them will only allow setting the ticket state to state_B if the currently selected ticket state in the UI is state_A. And once you select state_B, only state_A can be set anymore. -> So to me, having both ACLs enabled at the same time makes no sense (refer to 1).
3) There is no restriction regarding ACLs and custom states. It should work just as well.
Related
I want to use OptaPlanner for a configuration problem. For example:
Suppose you have a list of elements for selecting a type of car, where the inputs (variables) are brand, tire, motor, and color.
We'd have the domains:
brand: A,B,C,D,E; tire: G,H,I,J,K; motor: M,N,O,P,Q; color: R,S,T,U,V.
If I select color R, my domains are restricted by the constraints, and then we'd have:
brand: A,B,C; tire: G,H,I; motor: M,P,Q; color: R
Then, if I select tire G, the solution is: brand: B; tire: G; motor: M; color: R
In this example, I don't want a specific solution at the first moment; I want the domains to be propagated according to my requirements until the solution is reached. Is this possible with OptaPlanner? And where can I find more about propagation in OptaPlanner?
If I understand your question correctly, you want to restrict the domain of other variables based on the current value of another variable. This is not possible, as that would effectively make the planning entity solve the planning problem itself and interfere with the optimization algorithms.
Something closely related is "ValueRangeProvider on the Planning Entity" (https://www.optaplanner.org/docs/optaplanner/latest/planner-configuration/planner-configuration.html#valueRangeProviderOnPlanningEntity), where the domain depends on problem facts (which cannot change during solving).
If you want full control on how the domain changes, consider a custom move selector (https://www.optaplanner.org/docs/optaplanner/latest/move-and-neighborhood-selection/move-and-neighborhood-selection.html#customMoves) or a custom solver phase (https://www.optaplanner.org/docs/optaplanner/latest/optimization-algorithms/optimization-algorithms.html#customSolverPhase).
During tests we have many results related to System Issues. How can I move a result into my custom defect type "No Data" instead of "To Investigate"?
How can I tell ReportPortal that the next skip is related to "No Data"? Auto Analyze doesn't work with skipped tests.
I guess this question should be titled: how can I tell ReportPortal that the next skipped/failed item should have a custom defect instead of To_Investigate?
By default, all failures are considered by ReportPortal as To Investigate.
Basically, if a failed item is received by RP, a defect object with defect_type="TO_INVESTIGATE" will be assigned.
As an example, if you use TestNG you can add the rp.skipped.issue = false attribute.
rp.skipped.issue = option to mark skipped tests as not 'To Investigate' items on the server side. Boolean values: TRUE - skipped tests are considered issues and will be marked as 'To Investigate'. FALSE - skipped tests will not be marked as 'To Investigate' on the portal.
The API also supports submitting a custom defect right along with the failed/skipped item.
So you just need to extend your framework agent and have it send a specific defect type for skips or fails.
Please see the implementation in client-java.
In a few words: you should send an issue type with the name NOT_ISSUE if you don't want the To Investigate flag for a skipped (or any other) test item.
https://github.com/reportportal/client-java/commit/36c1624da17694fc2355ab0f628b2f1cc8a35c96#diff-69c3ef7f422402a9c55c68c001df11d1a06d0bd0c1df2d4e2e59406b50c91e2bR317
https://github.com/reportportal/client-java/commit/36c1624da17694fc2355ab0f628b2f1cc8a35c96
Suppose I have a resource called Person. I can update Person entities by doing a POST to /data/Person/{ID}. Suppose for simplicity that a person has three properties, first name, last name, and age.
GET /data/Person/1 yields something like:
{ id: 1, firstName: "John", lastName: "Smith", age: 30 }.
My question is about updates to this person and the semantics of the services that do this. Suppose I wanted to update John, he's now 31. In terms of design approach, I've seen APIs work two ways:
Option 1:
POST /data/Person/1 with { id: 1, age: 31 } does the right thing. Implicitly, any property that isn't mentioned isn't updated.
Option 2:
POST /data/Person/1 with the full object that would have been received by GET -- all properties must be specified, even if many don't change, because the API (in the presence of a missing property) would assume that its proper value is null.
Which option is correct from a recommended design perspective? Option 1 is attractive because it's short and simple, but has the downside of being ambiguous in some cases. Option 2 has you sending a lot of data back and forth even if it's not changing, and doesn't tell the server what's really important about this payload (only the age changed).
Option 1 - updating a subset of the resource - is now formalised in HTTP as the PATCH method. Option 2 - updating the whole resource - is the PUT method.
In real-world scenarios, it's common to want to upload only a subset of the resource. This is better for performance of the request and modularity/diversity of clients.
For that reason, PATCH is now more useful than PUT in a typical API (imo), though you can support both if you want to. There are a few corner cases where a platform may not support PATCH, but I believe they are rare now.
If you do support both, don't just make them interchangeable. The difference is that if PUT receives a subset of the resource, it should behave as though the whole thing was uploaded: it should apply default values to the omitted properties, or return an error if they are required. PATCH, by contrast, simply ignores the omitted properties.
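The two update semantics can be sketched as plain dictionary operations (a minimal Python sketch, not tied to any web framework; the Person fields are taken from the example above, and the defaults are an assumption for illustration):

```python
# Sketch of the two update semantics: PATCH merges a partial payload,
# PUT replaces the whole resource. "person" stands in for the stored
# server-side representation of /data/Person/1.

DEFAULTS = {"firstName": None, "lastName": None, "age": None}

def apply_patch(stored, payload):
    """PATCH: merge the partial payload; omitted properties stay untouched."""
    updated = dict(stored)
    updated.update(payload)
    return updated

def apply_put(stored, payload):
    """PUT: replace the resource; omitted properties fall back to defaults."""
    updated = dict(DEFAULTS)
    updated.update(payload)
    updated["id"] = stored["id"]  # the id comes from the URL, not the body
    return updated

person = {"id": 1, "firstName": "John", "lastName": "Smith", "age": 30}

patched = apply_patch(person, {"age": 31})
# firstName and lastName are preserved; only age changes.

put = apply_put(person, {"age": 31})
# firstName and lastName are reset to their defaults (None here).
```

The same payload { "age": 31 } thus produces very different results under the two verbs, which is exactly why they should not be treated as interchangeable.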
In Fiware CEP's User Manual (pdf), page 12, it's mentioned you can create an event Producer of the type 'Timed', that will retrieve events from a file at intervals of time based on their 'OccurranceTime' property.
In my FIWARE Lab instance I don't find this 'Timed' type of producer in the dropdown list, only: File, JMS, Rest and Custom.
So I thought this feature might be implemented in the 'File' type, but I can't get it to work: the 'sendingDelay' property of the producer always dictates the reading speed, not the 'OccurrenceTime' in the event payload. Deleting 'sendingDelay' from the producer makes it not send events at all.
OccurranceTime is said in the manual to be in milliseconds, and in the authoring tool it has the variable type 'Date', so "OccurranceTime":"1000" should mean one second.
So, how can I get events produced at the desired times? Is it just a matter of correct formatting?
(BTW: in the manual OccurranceTime is spelled in two different ways: 'OccuranceTime' and 'OccurranceTime'. I believe the correct one is with double 'r', as that's what the authoring tool gives by default when creating a new event.)
Thank you,
Arthur
The event producer of type 'Timed' is a new feature that is part of release 4 of the CEP. It should be available in FIWARE Lab in October.
When it is available, you will be able to choose it as the producer's type in the CEP Authoring tool. The CEP will then read events from an input file in which you write the expected occurrence time of each event.
For example, if the content of the input event file in JSON format is:
{"Name":"TrafficReport", "volume":"1000", "OccurrenceTime":"1000"}
{"Name":"TrafficReport", "volume":"1600", "OccurrenceTime":"6000"}
{"Name":"TrafficReport", "volume":"2500", "OccurrenceTime":"11000"}
The producer will process the second input event 5 seconds after the first input event, since it is said to occur 5000 ms after the first one.
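The pacing arithmetic can be shown with a small Python sketch (this only illustrates how the delays follow from the OccurrenceTime values; it is not part of the CEP itself):

```python
# The three example events from the input file above: each event is sent
# (OccurrenceTime - previous OccurrenceTime) milliseconds after the previous one.

events = [
    {"Name": "TrafficReport", "volume": "1000", "OccurrenceTime": "1000"},
    {"Name": "TrafficReport", "volume": "1600", "OccurrenceTime": "6000"},
    {"Name": "TrafficReport", "volume": "2500", "OccurrenceTime": "11000"},
]

times = [int(e["OccurrenceTime"]) for e in events]
delays_ms = [later - earlier for earlier, later in zip(times, times[1:])]
print(delays_ms)  # [5000, 5000] -> 5 seconds between consecutive events
```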
Just getting started with using chef recently. I gather that attributes are stored in one large monolithic hash named node that's available for use in your recipes and templates.
There seem to be multiple ways of defining attributes
Directly in the recipe itself
Under an attributes file - e.g. attributes/default.rb
In a JSON object that's passed to the chef-solo call. e.g. chef-solo -j web.json
Given the above 3, I'm curious
Are those all the ways attributes can be defined?
What's the order of precedence here? I'm assuming one of these methods supersedes the others
Is #3 (the JSON method) only valid for chef-solo ?
I see both node and default hashes defined. What's the difference? My best guess is that the default hash defined in attributes/default.rb gets merged into the node hash?
Thanks!
Your last question is probably the easiest to answer. In an attributes file you don't have to type 'node', so this in attributes/default.rb:
default['foo']['bar']['baz'] = 'qux'
Is exactly the same as this in recipes/whatever.rb:
node.default['foo']['bar']['baz'] = 'qux'
In retrospect having different syntaxes for recipes and attributes is confusing, but this design choice dates back to extremely old versions of Chef.
The -j option is available to both chef-client and chef-solo and will set attributes in either case. Note that these will be 'normal' attributes, which are persisted in the node object and are generally not recommended to use. However, the 'run_list', 'chef_environment' and 'tags' on servers are implemented this way. Avoid other 'normal' attributes, i.e. node.normal['foo'] = 'bar' or node.set['foo'] = 'bar', in recipe (or attribute) files. The difference is that if you delete the node.normal line from the recipe, the old setting on a node will persist, while if you delete a node.default setting from a recipe, then when you run chef-client on the node that setting will get deleted.
What happens in a chef-client run to make this happen is that at the start of the run the client issues a GET to fetch its old node document from the server. It then wipes the default, override and automatic (ohai) attributes while keeping the 'normal' attributes. The behavior of the default, override and automatic attributes makes the most sense: you start over at the start of the run and then construct all the state, and if it's not in the recipe then you don't see a value there. However, the run_list is normally set on the node, and nodes do not (often) manage their own run_list. In order to make the run_list persist, it is a normal attribute.
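The start-of-run behavior described above can be sketched as a toy model (plain Python, not Chef's actual implementation; the attribute names are illustrative):

```python
# Toy model of a node document across two chef-client runs.
# At the start of a run, default/override/automatic are wiped; normal persists.

def start_of_run_wipe(node):
    """Keep only 'normal' attributes from the saved node document."""
    return {"normal": dict(node["normal"]),
            "default": {}, "override": {}, "automatic": {}}

# Run 1: a recipe sets one default attribute and one normal attribute.
node = {"normal": {}, "default": {}, "override": {}, "automatic": {}}
node["default"]["color"] = "blue"
node["normal"]["run_list"] = ["recipe[foo]"]

# Run 2: both attribute-setting lines were deleted from the recipe,
# so only the wipe happens. The default attribute is gone; normal persists.
node = start_of_run_wipe(node)
print(node["default"])  # {}
print(node["normal"])   # {'run_list': ['recipe[foo]']}
```

This is why deleting a node.default line removes the value on the next run, while a node.normal value lingers until you delete it explicitly.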
The choice of the word 'normal' is unfortunate, as is the choice of node.set for setting 'normal' attributes. While those look like the obvious choices for setting attributes, users should avoid them. Again, the problem is that they came first and are necessary and required for the run_list. Generally, stick with default and override attributes only. Typically you can get most of your work done with default attributes; those should be preferred.
There's a big precedence level picture here:
https://docs.chef.io/attributes.html#attribute-precedence
That's the ultimate source of truth for attribute precedence.
That graph describes all the different ways that attributes can be defined.
The problem with Chef attributes is that they've grown organically and sprouted many options to try to help out users who painted themselves into a corner. In general you should never need to touch the automatic, normal, force_default or force_override levels of attributes. You should also avoid setting attributes in recipe code; move attribute-setting from recipes into attribute files. That leaves these places to set attributes:
in the initial -j argument (sets normal attributes; you should limit this to setting the run_list, and overusing it is generally a smell)
in the role file as default or override precedence levels (careful with this one though because roles are not versioned and if you touch these attributes a lot you will cause production issues)
in the cookbook attributes file as default or override precedence levels (this is where you should set most of your attributes)
in environment files as default or override precedence levels (can be useful for settings like DNS servers in a datacenter, although you can use roles and/or cookbooks for this as well)
You can also set attributes in recipes, but when you do that you invariably wind up getting your next lesson in the two-phase compile-converge model that runs through Chef recipes. If you have recipes that need to communicate with each other, it's better to use node.run_state, which is just a hash that doesn't get written as node attributes. You can drop node.run_state[:foo] = 'bar' in one recipe and read it in another. You probably will see recipes that set attributes, though, so you should be aware of that.
Hope That Helps.
When writing a cookbook, I visualize three levels of attributes:
Default values to converge successfully -- attributes/default.rb
Local testing override values -- JSON or .kitchen.yml (have you tried chef_zero using ChefDK and Kitchen?)
Environment/role override values -- link listed in lamont's answer: https://docs.chef.io/attributes.html#attribute-precedence