Is there a shortcut for creating a custom role that is the same as an existing role, minus some operations? - palantir-foundry

https://www.palantir.com/docs/foundry/platform-security-concepts/projects-and-roles/#create-new-role-sets describes how to create custom role sets, and includes an example of a role set containing "Editor without ability to sync."
In general, it seems that "X without Y" is a useful concept ("Viewer without ability to export" is another example); however, there doesn't seem to be a first-class way of defining a custom role in those "subtractive" terms. To create such a role, it is necessary to manually copy all of the operations except for a small subset from the original role, which also means that any operations later added to the original role will need to be added manually if we want to keep the two in sync.
Is my understanding correct that there is no convenient shortcut for this use-case?

While "subtractive" definition is not supported per se, you can greatly reduce the burden of keeping your custom roles up-to-date by being sure to include "updates" for operation groups that you know will not come to contain operations that you will want to restrict. Although there is still some maintenance burden that remains for other cases, that is a reasonable trade-off for the additional assurance that your role will not come to grant an operation that you did not intend to grant.
Additionally, the time investment of scrolling down the page of operations, observing which operations / operation groups are granted to the default roles, and granting or not-granting to your new role is quite modest (it took me about 15 minutes to create a "Viewer without Download" role using that approach), so it's not extremely troublesome that there isn't a shortcut to make that one-time interaction more efficient.

Related

Sensible way to have a common set of roles as base for multiple playbooks in ansible?

I use Ansible to manage a wide range of VMs, all with their own specifics, but each has some commonly defined roles.
E.g. multiple playbooks reference a role that sets up admin users with access; the same goes for SSH setup, time synchronization, timezone, etc.
Right now these roles are explicitly referenced in the same way in each of these playbooks, which is hard to maintain if a role happens to change.
I tried two methods:
Include playbooks: While an included playbook can be run for an inventory file, which would cover all the needed VMs, it still has a separate configuration set, and I would like to avoid possible misconfigurations with included playbooks.
Master role with included roles: I managed to make this method work by passing variables; however, it is a bit hard to set up, and the resulting maintainability and traceability of the variable flow defeats the purpose of ease of use.
For anyone more experienced: is there a suggested way to group commonly used roles together while still having the option to use them separately if needed?
I would not claim to be more experienced, but I do have a way of 'making it work for me', and I am also struggling with the question of whether this is the best (or at least a good) way of doing it.
My hosts are divided into groups, and each group has its own set of variables in group_vars. I have a single playbook for each top-level group ("server" and "client"). The roles are split up according to function ("webserver" or "gnome-desktop"). Whenever a role is used by multiple hosts or groups, I use conditionals based on inventory_hostname, groups, or a custom variable. This does generate some repetition, but that can be kept minimal by conditionally importing tasks; a rough sketch follows.
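For illustration, here is a minimal sketch of that layout; the group, role, and variable names are all invented:

```yaml
# servers.yml - one playbook for the top-level "server" group
- hosts: server
  roles:
    # roles every server gets
    - admin_users
    - ssh_setup
    - timesync
    # role applied only to hosts that are also in the "web" group
    - role: webserver
      when: "'web' in group_names"
```

```yaml
# roles/admin_users/tasks/main.yml - conditionally importing tasks
- import_tasks: extra_admins.yml
  when: manage_extra_admins | default(false)
```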
To be completely honest: I'm not there yet, but in a few weekends from now I hope to get there :-).

Single Responsibility Principle Core Understanding

In brushing up on the SRP I read this document which I located via Uncle Bob's page on principles of OOD. I find the following passage puzzling and somewhat at odds with the rest of the document:
"If, on the other hand, the application is not changing in ways that cause the the two responsibilities to change at different times, then there is no need to separate them. Indeed, separating them would smell of Needless Complexity. There is a corollary here. An axis of change is only an axis of change if the changes actually occur. It is not wise to apply the SRP, or any other principle for that matter, if there is no symptom."
While I understand that the answer to many software development questions is "it depends", principles like the SRP appear to be almost universally beneficial and something to implement as a matter of course. The SRP itself affords code high adaptability to future changes in requirements. Isn't the point to separate out responsibilities from the get-go to avoid struggling with highly coupled code and cascading changes later on?
I would really appreciate some clarification on this to make sure my understanding of this core principle is correct. Thanks in advance!
From my humble understanding, in the Modem example presented here, it is possible that the responsibilities of the modem (Connection and Data Exchange) will change as one.
You have two possibilities here:
When the protocol changes, it is possible that only the connection part changes, or only the data exchange part changes. In this case you should have two interfaces, because a change of protocol in the data exchange does not imply a change of protocol in the connection.
When the protocol changes, it will always change both the connection part and the data exchange part. In that case, you don't need two interfaces, because every time you have to rewrite the connection part, you are sure that the data exchange will change as well. The two responsibilities sit on the same axis of change (the protocol handled by the modem), so you can leave them inside a single interface.
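As a rough sketch of the two cases in Python: the method names loosely mirror the dial/hangup/send/receive operations discussed in the article, but the interface names here are made up.

```python
from abc import ABC, abstractmethod

# Case 1: connection and data exchange can change independently,
# so each responsibility gets its own interface.
class Connection(ABC):
    @abstractmethod
    def dial(self, number: str) -> None: ...
    @abstractmethod
    def hangup(self) -> None: ...

class DataChannel(ABC):
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...

# Case 2: both responsibilities always change together with the protocol,
# so a single interface on that one axis of change is acceptable.
class Modem(ABC):
    @abstractmethod
    def dial(self, number: str) -> None: ...
    @abstractmethod
    def hangup(self) -> None: ...
    @abstractmethod
    def send(self, data: bytes) -> None: ...
    @abstractmethod
    def receive(self) -> bytes: ...
```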
The key to this statement is "not changing in ways that cause the two responsibilities to change at different times". Let's say, for the sake of argument, you have a PaymentLogger and a Payment class. Every time you create a new PaymentType (CreditCard, Cash, Paypal, etc.) you need to update the PaymentLogger to log actions specific to those Payments. Instead of splitting out a PaymentLogger class, you could give the Payment class a method called Log which does whatever is specific to itself.
In this case it could be that the act of recording actions should be built into the class itself, since creating a new Payment also requires creating a new PaymentLogger. It's a responsibility that should have been part of Payment all along.
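A minimal sketch of that idea in Python, with the payment type and logging behaviour invented for illustration:

```python
from abc import ABC, abstractmethod

# Logging folded into Payment because the two always change together:
# every new payment type needs its own type-specific logging anyway.
class Payment(ABC):
    @abstractmethod
    def process(self, amount: float) -> None: ...

    @abstractmethod
    def log(self, amount: float) -> None:
        """Record whatever is specific to this payment type."""

class CreditCardPayment(Payment):
    def process(self, amount: float) -> None:
        self.log(amount)
        # ... charge the card ...

    def log(self, amount: float) -> None:
        print(f"charged {amount} to credit card")
```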

Soft Delete vs. DB Archive

Suggested Reading
Similar: Are soft deletes a good idea?
Good Article: http://weblogs.asp.net/fbouma/archive/2009/02/19/soft-deletes-are-bad-m-kay.aspx
How I ended up here
I strongly believe that when making software, anything done up front to minimize work later on pays off in truckloads. As such, I am trying to make sure when approaching my database schema and maintenance that it can maintain relational integrity while not being archaic or overly complex.
This resulted in a sort of shudder when looking at the typical delete approach, CASCADE. Yikes, a little over the top for my current situation. I wanted to maintain relational graph integrity, but I didn't want to remove every graph just because one part of the chain was irrelevant. Therefore I chose to go the way of soft deleting to make sure data integrity would remain while records could be removed from relevance. I accomplished this by adding a "DateDeleted" field to every, sigh, table in the database.
Turning Point
However, this is clearly starting to add too much complexity and work to be worth it. I am including logic where it should not go and do not feel like perpetuating these bad practices throughout my whole application. In short, I am going to roll back this implementation.
When looking up whether or not people like soft deleting, it seems there is a lot of support for it. In fact, the linked "Similar" post up top sports a top-voted answer of "I always soft-delete". Moreover, the majority of answers there and around SO include some sort of "isDeleted" or "isActive" type of approach.
New Implementation Idea
The "Good Article" linked covers some of the issues I actually began encountering. It also suggests an alternative to soft-deleting which I found spot on from a best practices standpoint. The suggestion is to use an "Archiving Database", which I had actually considered when looking at soft deleting. The reason I decided against it was because of the point I made earlier about CASCADE deleting. I am wary to remove entire graphs from the database because one part of the chain is removed. However, this graph would be able to be retained at least from the archive so I am not sure that it would be really that terrible.
Crossroads
So, should I just keep adding logic, logic, logic....logic? Or, should I consider making the archival database where most of the logic would simply sit in a very complex graph management class to store / restore relational object graphs? The latter seems to be best practice to me.
Soft deleting is definitely an easy approach in theory. However, not much attention is paid to what to do with the data that wasn't actually deleted; in fact, it is glossed over.
In my opinion this is because the wrong issue is in focus: not just "what does deleting mean", but what IS being deleted. When a record is to be removed, what is really being removed is a node in a graph, not just a single record. That whole graph integrity is the reason people band-aid over the issue with "soft deletes". These band-aid solutions tend to hide the gangrene underneath: a festering problem which only gets worse with time.
What's worse is that, to accommodate the soft delete, logic must be included all over the place (many times breaking various conventions and implementing anti-patterns) to account for the possible breaks in the object graph. Moreover, what kind of business logic is "isDeleted"?!
I believe a very strong solution to this problem, the problem of removing an object while retaining the referential integrity of the object graph, is to use an archival pattern. On delete of an object, the object is archived then deleted. The archive database, a mirror database with meta data (temporal database design can be used and is very relevant here), would then receive the object to be archived and restored if necessary.
This makes it very direct to avoid listing or including a deleted object as the relevant database will no longer hold it. Now, the same logic which was applied looking for "isDeleted" "isActive" or "DeletedDate" can be applied in the correct place (Not all over the place) to foreign keys of retrieved objects. When a foreign key is present, but the object is not, then there is now a logical explanation and a logical set of options. Display that the containing object was deleted and some course of action: "Restore, Delete Current Containing Object, View Deleted". These options can be either chosen by the user, or explicitly defined in code in a logical manner. Depending on how advanced the archival database is, perhaps more options exist such as who deleted it, when, why, etc. etc.
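As a rough sketch of the "archive, then delete" flow: the table and column names below are invented, the tables are assumed to already exist, and a real implementation would also copy the related rows that make up the object graph.

```python
import sqlite3

live = sqlite3.connect("live.db")        # the operational database
archive = sqlite3.connect("archive.db")  # the mirror/archive database

def archive_and_delete(order_id: int) -> None:
    # Copy the row into the archive (with deletion metadata), then remove it
    # from the live database so normal queries never have to filter it out.
    row = live.execute(
        "SELECT id, customer_id, total FROM orders WHERE id = ?", (order_id,)
    ).fetchone()
    if row is None:
        return
    archive.execute(
        "INSERT INTO orders_archive (id, customer_id, total, deleted_at) "
        "VALUES (?, ?, ?, datetime('now'))",
        row,
    )
    archive.commit()
    live.execute("DELETE FROM orders WHERE id = ?", (order_id,))
    live.commit()
```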

ways to learn implementing workflow of a software

How many ways are there to learn to implement the workflow of a piece of software? What are they?
If you mean the user workflow, how the user is guided through the software...
I usually use some sort of state machine to limit what functionality can be triggered by the user and what information will be presented to the user in a particular state of the workflow. This way I can concentrate on designing each segment of the flow in its own "sandbox" and decision making becomes a lot easier.
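A bare-bones sketch of that state-machine idea in Python; the states and actions are invented, but the point is that the UI only ever offers the actions allowed in the current state:

```python
# Each state declares which actions are allowed and where they lead.
TRANSITIONS = {
    "start":   {"enter_details": "details"},
    "details": {"submit": "review", "back": "start"},
    "review":  {"confirm": "done", "edit": "details"},
    "done":    {},
}

class Workflow:
    def __init__(self) -> None:
        self.state = "start"

    def allowed_actions(self) -> list[str]:
        # The UI presents only these options to the user.
        return list(TRANSITIONS[self.state])

    def trigger(self, action: str) -> None:
        if action not in TRANSITIONS[self.state]:
            raise ValueError(f"{action!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[self.state][action]
```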
If you do not mean user workflow, you can ignore this reply.
Usually you do have steps in a workflow. A step consists of a precondition (business logic hidden from the UI), some user interaction (the user entering data and doing "user stuff"), and post-conditions. Usually the user-interaction part has one or more user-chosen "exits", and every exit has its own post-condition (usually every user exit has its own business logic, depending on the meaning of that exit from the step). Exits navigate the workflow to the next step. Sometimes you can have fully automatic steps (e.g. using some external data source, calling a web service, doing an important calculation, and so on).
If your workflow is simple, you may implement it as a set of classes representing each step, with the configuration of the step order put in XML; a sketch of this follows below. When your workflow grows bigger and bigger, it may be reasonable to look for a workflow engine (a discussion of workflow engines is, I think, beyond the scope of this question).
One important thing: steps can be orthogonal, but that is harder to design. If your steps rely on one another, the person configuring the workflow and the step order must be fully aware of such dependencies (e.g. a user-address step will probably depend on a user-object-creation step, and removing the user-object-creation step from a workflow will result in trying to access a nonexistent object).
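A minimal Python sketch of the "steps as classes, order in XML" idea; the class names, the XML layout, and the exit handling are all invented for illustration:

```python
import xml.etree.ElementTree as ET
from abc import ABC, abstractmethod

class Step(ABC):
    def precondition(self, context: dict) -> bool:
        return True  # business logic hidden from the UI

    @abstractmethod
    def run(self, context: dict) -> str:
        """Interact with the user (or run automatically) and return an exit name."""

class CollectUserData(Step):
    def run(self, context: dict) -> str:
        context["name"] = input("Name: ")
        return "ok"

class Confirm(Step):
    def run(self, context: dict) -> str:
        return "ok" if input("Confirm? [y/n] ") == "y" else "cancel"

STEPS = {"CollectUserData": CollectUserData, "Confirm": Confirm}

def run_workflow(xml_path: str) -> None:
    # Expected file: <workflow><step class="CollectUserData"/><step class="Confirm"/></workflow>
    context: dict = {}
    for node in ET.parse(xml_path).getroot().findall("step"):
        step = STEPS[node.get("class")]()
        if step.precondition(context):
            exit_name = step.run(context)
            if exit_name == "cancel":  # a user-chosen exit steers the flow
                break
```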

How do you handle exceptional cases

This is a common situation, but here is the latest example:
Companies have various contact data (addresses, phone numbers, e-mails, ...). When they create a job ad, they have checkboxes where they choose how they want to be contacted. It is basically descriptive data. A user reading an ad sees something like "You can apply by mail, in person...", except if an option is "through web portal" or "by e-mail", because then appropriate buttons should appear. These options are stored in the database, and the client (the owner of the site, not the company making an ad) can change them (e.g. they can add "by telepathy" or whatever), yet if they tamper with the "e-mail" and "web portal" options, they break their web site.
So how should I handle data where everything behaves the same way except "this thing", which behaves one way, and "that thing", which behaves some other way, while the data itself is live and should be editable by the client?
You've tagged your question as "language-agnostic", and not all languages cleanly support polymorphism, but that's the way I would approach this.
Each option has some type, and different types require different properties to be set. However, every type supports some sort of "render" method that can display the contact method as needed. Since the properties (phone number, or web address, etc.) are type-specific, you can validate the administrator's input when creating these "objects", to make sure that the necessary data is provided and valid. Since you implement the render method, rather than spitting out HTML provided by a user, you can ensure that the rendered page is correct. It's less flexible, but safer and more user friendly.
In the database, you can have one sparsely populated table that holds data for all types of contacts, or a "parent" table with common properties and sub-tables with type-specific properties. It depends on how many types you have and how different they are. In either case, you would have some sort of type indicator, so that you know the type of object to which the data should be bound.
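A minimal sketch of the polymorphic approach in Python, with invented class and field names: the two special options get their own classes, validation, and rendering, while everything else falls through to a generic text option.

```python
from abc import ABC, abstractmethod
import html

class ContactMethod(ABC):
    @abstractmethod
    def render(self) -> str:
        """Return safe HTML for this contact option."""

class PlainText(ContactMethod):
    """Generic option such as 'by mail', 'in person', or 'by telepathy'."""
    def __init__(self, label: str) -> None:
        self.label = label
    def render(self) -> str:
        return f"<li>{html.escape(self.label)}</li>"

class EmailApplication(ContactMethod):
    def __init__(self, address: str) -> None:
        if "@" not in address:            # validate at creation time
            raise ValueError("invalid e-mail address")
        self.address = address
    def render(self) -> str:
        return f'<a href="mailto:{html.escape(self.address)}">Apply by e-mail</a>'

class WebPortalApplication(ContactMethod):
    def __init__(self, url: str) -> None:
        self.url = url
    def render(self) -> str:
        return f'<a href="{html.escape(self.url)}">Apply via web portal</a>'
```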
First of all, think twice about whether you really need it. The reason is simple: you are supposed to serve a specific need, and input data is a means to provide that service. If the data does not fit the existing service, then what is its value, and who are the consumers of that specific information?
There are two possible answers: you are expanding your client base, or you need to change the existing service because of a change in demand. In both cases you need to start from the development of the business model. If you describe what service you need and what information it should provide, you will avoid much of the specific data and arrive at clear requirements that are easy to implement in software.
I'd recommend the resolution pattern for this, based on the mention of a database. The link above describes it, but it's actually a lot simpler than it sounds. You write a database query that returns all the possible options (for example, you read the standard options and the customized options together using perhaps a UNION or a JOIN depending on your schema) - the COALESCE SQL keyword is then useful to find the first 'resolution' of the option value that isn't NULL.
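Here is a rough, self-contained illustration of that resolution idea using SQLite from Python; the schema and option names are invented, but it shows COALESCE picking the first non-NULL value (a customization, if present, wins over the standard option):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE standard_options (name TEXT PRIMARY KEY, label TEXT);
    CREATE TABLE custom_options   (name TEXT PRIMARY KEY, label TEXT);
    INSERT INTO standard_options VALUES ('email', 'By e-mail'), ('mail', 'By mail');
    INSERT INTO custom_options   VALUES ('mail',  'By snail mail');
""")

rows = conn.execute("""
    SELECT s.name,
           COALESCE(c.label, s.label) AS label   -- first non-NULL resolution wins
    FROM standard_options AS s
    LEFT JOIN custom_options AS c ON c.name = s.name
""").fetchall()
print(rows)  # e.g. [('email', 'By e-mail'), ('mail', 'By snail mail')]
```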
Well, if all it is is that you have two options that are special, and then anything else is dealt with in the same way, then store your options as strings, and if either of the two special ones appears in that list, then show the appropriate stuff for that special item.
Just check your list of items for the two special ones. Nothing fancy.
By writing a very simple Rules Engine. You can use an out-of-the-box implementation, or you can roll your own. Since your case seems so simple, I tend to roll my own, because it means fewer dependencies (YMMV).
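A bare-bones "roll your own" sketch in Python, with the option strings and HTML snippets invented: each rule pairs a predicate with a handler, and the first matching rule decides how an option is rendered.

```python
from typing import Callable

Rule = tuple[Callable[[str], bool], Callable[[str], str]]

RULES: list[Rule] = [
    (lambda opt: opt == "by e-mail",
     lambda opt: '<a href="mailto:jobs@example.com">Apply by e-mail</a>'),
    (lambda opt: opt == "through web portal",
     lambda opt: '<a href="/apply">Apply via web portal</a>'),
    # fallback: any other option is plain descriptive text
    (lambda opt: True, lambda opt: opt),
]

def render_option(option: str) -> str:
    for matches, handle in RULES:
        if matches(option):
            return handle(option)
    return option
```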