How to save a related record in the Bizlet preSave that isn't automatically saved - Skyve

We have a Skyve document called NNSF - when an NNSF is created (saved) we need to create a record called a PAM which has an aggregation association to the NNSF.
So in the preSave() we have some code to do this:
PAM pam = PAM.newInstance();
// set a bunch of other attributes
...
// set an association to this NNSF
pam.setNnsf(bean);
The PAM record won't be saved automatically because it's not referenced by the NNSF document, so if we don't explicitly call CORE.getPersistence().save(pam) - the PAM is not saved to the DB even though the NNSF is.
If we do call CORE.getPersistence().save(pam) we get an infinite looping of the preSave() because the PAM has an association to this NNSF.
If we call CORE.upsertBeanTuple(pam) instead, we get an FK constraint violation, as you would expect, because of the association to the NNSF.

There are a few approaches to this...
Use an inverseMany in NNSF to PAM with cascade true. This establishes a bidirectional relationship: a PAM has one NNSF, and an NNSF has many PAMs. This allows a cascade save from either direction. The downside is that this can become a drag on performance depending on how many PAMs relate to an NNSF.
Use NNSFBizlet.postSave() to effect the save of the PAM during the save of the NNSF. In postSave() the NNSF is already persisted by the time the PAM is saved. You may need to use either a hand-crafted boolean bean property in the NNSFExtension class or a boolean document attribute to control whether to save the PAM in postSave() of NNSFBizlet, to avoid infinite recursion.
You could upsert the PAM in NNSFBizlet's postSave(), since the flush of the NNSF has occurred by this event, but this would only be useful if the PAM document has no further relations (e.g. associations or collections).

In this case, it turns out the fix was simple - move the code to the postSave() method.
However, I still needed to calculate in preSave() whether the creation of the related records should occur, even though the save itself happens in postSave().
To do this, you could either create a transient boolean attribute in the document and set it to Boolean.TRUE during preSave() (the option I chose), or use a variable in the Extension class.
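The flag pattern can be sketched independently of Skyve. This is an illustrative, framework-agnostic JavaScript sketch, not Skyve code: Skyve's real API is Java (Bizlet.preSave()/postSave(), CORE.getPersistence()), and the in-memory "db" and cascade here only mimic the shape of the problem described in the question.

```javascript
const db = [];

function save(bean) {
  if (bean.bizlet) bean.bizlet.preSave(bean);
  if (!db.includes(bean)) db.push(bean);      // the bean is persisted here
  if (bean.nnsf) save(bean.nnsf);             // mimic the cascade/flush that re-fires
                                              // the NNSF's callbacks when the PAM is saved
  if (bean.bizlet) bean.bizlet.postSave(bean);
}

const nnsfBizlet = {
  preSave(bean) {
    // decide here whether the related record is needed, but don't save it yet;
    // keying off "not yet persisted" keeps the flag false on re-entry
    bean.createPam = !db.includes(bean);
  },
  postSave(bean) {
    if (bean.createPam) {
      bean.createPam = false;                  // clear the flag before saving
      const pam = { type: 'PAM', nnsf: bean }; // association back to the NNSF
      save(pam);                               // NNSF is already persisted: no FK problem
    }
  },
};

const nnsf = { type: 'NNSF', bizlet: nnsfBizlet };
save(nnsf);
// db now holds the NNSF and exactly one PAM, with no infinite recursion
```

The key move is that preSave() only records the decision; the actual save of the related record happens in postSave(), after the parent is persisted, guarded by the flag.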

Related

Create or update record for ArchivedUser

I am trying to make a backup table of users, called archived users. It creates the ArchivedUser by taking a hash of the current user's attributes (self) and merging in self.id as the user_id.
When a user is reinstated, their record as an ArchivedUser still remains in the ArchivedUser table. If the user gets deleted a second time, it should update any attributes that have changed.
Currently, it throws a validation error:
Validation failed: User has already been taken, as the self.id already exists in the ArchivedUser table.
What is a better way to handle an object where you update an existing object if possible, or create a new record if it doesn't exist. I am using Rails 4 and have tried find_or_create_by but it throws an error
Mysql2::Error: Unknown column 'device_details.device_app_version'
which is odd, as that column exists in both tables and doesn't get modified.
User Delete Method
# creates ArchivedUser with the exact attributes of the User
# object and merges self.id to fill user_id on ArchivedUser
if ArchivedUser.create!(
self.attributes.merge(user_id: self.id)
)
Thanks for taking a peek!
If your archived_users table is truly acting as a backup for users and not adding any additional functionality, I would ditch the ArchivedUser model and simply add an archived boolean on the User model to tell whether or not the user is archived.
That way you don't have to deal with moving an object to another table and hooking into a delete callback.
However, if your ArchivedUser model does offer some different functionality compared to User, another option would be to use single table inheritance to differentiate the type of user. In this case, you could have User govern all users, and then distinguish between a user being, for example, an ActiveUser or an ArchivedUser.
This takes more setup and can be a bit confusing if you haven't worked with STI, but it can be useful when two similar models need to differ only slightly.
That being said, if you want to keep your current setup, there are a few issues I see with your code:
If you are going to create an object from an existing object, it's good practice to duplicate the object (dup) so that the id isn't carried over and the new record's id can be auto-incremented.
If you truly have deleted the User record from the database, there's no reason to store a reference to its id because it's gone. But if you aren't actually deleting the record, you should definitely just use a boolean attribute to determine whether or not the user is active or archived.
I don't have enough context here as to why find_or_create_by isn't working, but if you do use it, I would keep it as simple as possible. Don't look up by all the attributes, just the consistent ones (like id) that you know will return the proper result.
if ArchivedUser.create! # ... is problematic. The bang after create (i.e. create!) will raise an error if the record could not be created, making the if pointless. So either use create (without the bang) if you don't want errors raised and want to handle the case where the record was not created, or use create! without the if if you do want an error raised.
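The "update if it exists, create if it doesn't" behaviour the question asks for is roughly what Rails' find_or_initialize_by followed by update gives you. Here the same logic is sketched as runnable JavaScript over an in-memory array standing in for archived_users; the column names (user_id, id) are the question's, the sample data is hypothetical.

```javascript
const archivedUsers = [];

function archiveUser(user) {
  const attrs = { ...user, user_id: user.id };
  delete attrs.id;                                  // drop the copied id (the dup advice)
  const existing = archivedUsers.find(a => a.user_id === user.id);
  if (existing) {
    Object.assign(existing, attrs);                 // archived before: update in place
  } else {
    archivedUsers.push({ id: archivedUsers.length + 1, ...attrs }); // first archive
  }
}

archiveUser({ id: 7, name: 'Ada' });
archiveUser({ id: 7, name: 'Ada Lovelace' });       // reinstated, then deleted again
// archivedUsers still holds a single row for user_id 7, now with the updated name
```

The point is that the lookup uses only the stable key (user_id), never the full attribute hash, so a second archival updates rather than violating the uniqueness validation.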

Storing unconfirmed and confirmed data to a database

I am creating a web application using Strongloop using a MySQL database connector.
I want it to be possible for a user to modify data in the application - but this data should not be 'saved' until the user expressly chooses to save it.
On the other hand, this is a web application and I don't want to keep the data in the user's session or local storage - I want this data to be immediately persisted so it can be recovered easily if the user loses their session.
To implement it I am thinking of doing the following, but I'm not sure if this is a good idea, or if there is a better way to be doing this.
This is one way I could implement it without doing too much customization on an existing relation:
add a new generated index as the primary key for the table
add a new generated index that represents the item in the row
this would be generated for new items, or set to the old item's index for edits
add a boolean attribute 'saved'
Data will be written with 'saved=false'. To 'save' the data, the row is marked saved and the old row is deleted. The old row can be looked up by its key, the second attribute in the row.
The way I was thinking of implementing it is to create a base entity called Saveable. Then every Database entity that extends Saveable will also have the 'Saveable' property.
Saveable has:
A generated id number
A generated non id number - the key for the real object
A 'saved' attribute
I would then put a method in Saveable.js to perform the save operation and expose it via the API, and a method to intercept new writes and store them as unsaved.
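The draft/saved scheme above can be sketched in plain JavaScript (Strongloop being a Node.js framework), with an in-memory array standing in for the MySQL table. The row id, item key, and 'saved' flag are the question's own three attributes; the function names are made up for illustration.

```javascript
const rows = [];
let nextRowId = 1;

function writeDraft(itemKey, data) {
  // every write lands immediately as an unsaved draft row
  rows.push({ rowId: nextRowId++, itemKey, saved: false, ...data });
}

function saveItem(itemKey) {
  const draft = rows.find(r => r.itemKey === itemKey && !r.saved);
  if (!draft) return;
  // delete the previously saved row for this item, then promote the draft
  const oldIndex = rows.findIndex(r => r.itemKey === itemKey && r.saved);
  if (oldIndex !== -1) rows.splice(oldIndex, 1);
  draft.saved = true;
}

writeDraft('item-1', { title: 'v1' });
saveItem('item-1');
writeDraft('item-1', { title: 'v2' });   // a new unsaved edit of the same item
saveItem('item-1');                      // old row deleted, v2 becomes current
```

Because drafts are persisted immediately with saved=false, a lost session costs nothing; "save" is just a flag flip plus a delete of the superseded row.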
My question is - is this a reasonable way to achieve what I want?

Implement a Whitelist on Relational Databases

I have a growing web system of around 30,000 users.
There are a few actions on the web system that are disabled for most users. But some trusted clients might be given the privilege to use them.
I already have an object that handles the permissions for each user (get/set). Those permissions are represented in each user database entry as an Integer Field (each bit is a permission).
In my first option I would add a new permission field on my permission manager object for each whitelist that I would want to implement. Then when I would like to know if the current user is on the whitelist, I would call this object and check that permission.
But if I wanted to show the admin a whitelist (display, edit, delete, etc.), I would have to create a permission manager object 30,000 times and test each permission, which I think is very wasteful.
My second option is to create a new table, whitelists, where each row would be a different whitelist, with a TEXT field holding a comma-separated list of user ids. The problem: TEXT, VARCHAR, or any other kind of text field has a character count limit.
I think the second option is the more efficient, since it's improbable that I would put more than 100 users in a whitelist.
But is there a better implementation?
Maybe I could add a new method to my permission manager object that would use only one query to build the list of users holding a selected permission.
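That one-query idea fits the existing bit-per-permission field: the database can filter with a bitwise AND instead of instantiating a permission manager per user. A small sketch, with a hypothetical bit assignment and an in-memory user list standing in for the table:

```javascript
const WHITELIST_FEATURE_X = 1 << 3;   // pick a free bit for the new whitelist

function hasPermission(permissionField, bit) {
  return (permissionField & bit) !== 0;
}

// In SQL this is a single query along the lines of:
//   SELECT id FROM users WHERE (permissions & 8) != 0
// Simulated here over an in-memory list:
const users = [
  { id: 1, permissions: 0b0001 },
  { id: 2, permissions: 0b1001 },
  { id: 3, permissions: 0b1000 },
];
const whitelisted = users
  .filter(u => hasPermission(u.permissions, WHITELIST_FEATURE_X))
  .map(u => u.id);
// whitelisted is [2, 3]
```

This keeps the whitelist inside the existing integer field and pushes the membership scan to the database, so no separate whitelists table or text-field limit comes into play.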

Transactional Replication to different schemas?

I have database A and database B. I would like to do one way replication from A to B.
The only hitch is that [A].[dbo].[table] needs to replicate to [B].[someschema].[table]. Is this easy (or even possible) to do? The key requirement is that I have real-time sync. I do not need to transform the table definition at all in database B.
Short answer: yes, you can do this, but not without some effort.
FROM BOOKS ONLINE:
Schemas and Object Ownership
Replication has the following default behavior in the New Publication Wizard with respect to schemas and object ownership:
For articles in merge publications with a compatibility level of 90 or higher, snapshot publications, and transactional publications: by default, the object owner at the Subscriber is the same as the owner of the corresponding object at the Publisher. If the schemas that own objects do not exist at the Subscriber, they are created automatically.
For articles in merge publications with a compatibility level lower than 90: by default, the owner is left blank and is specified as dbo during the creation of the object on the Subscriber.
The object owner can be changed through the Article Properties dialog box and through the following stored procedures: sp_addarticle, sp_addmergearticle, sp_changearticle, and sp_changemergearticle. For more information, see
http://msdn.microsoft.com/en-us/library/ms151197.aspx

MongoDB Many-to-Many Association

How would you do a many-to-many association with MongoDB?
For example, let's say you have a Users table and a Roles table. Users have many roles, and roles have many users. In SQL land you would create a UserRoles join table.
Users:
Id
Name
Roles:
Id
Name
UserRoles:
UserId
RoleId
How is the same sort of relationship handled in MongoDB?
Depending on your query needs you can put everything in the user document:
{name:"Joe"
,roles:["Admin","User","Engineer"]
}
To get all the Engineers, use:
db.things.find( { roles : "Engineer" } );
If you want to maintain the roles in separate documents then you can include the document's _id in the roles array instead of the name:
{name:"Joe"
,roles:["4b5783300334000000000aa9","5783300334000000000aa943","6c6793300334001000000006"]
}
and set up the roles like:
{_id:"6c6793300334001000000006"
,rolename:"Engineer"
}
Instead of trying to model according to our years of experience with RDBMSs, I have found it much easier to model document-repository solutions using MongoDB, Redis, and other NoSQL data stores by optimizing for the read use cases, while being considerate of the atomic write operations that need to be supported by the write use cases.
For instance, the uses of a "Users in Roles" domain follow:
Role - Create, Read, Update, Delete, List Users, Add User, Remove User, Clear All Users, Index of User or similar to support "Is User In Role" (operations like a container + its own metadata).
User - Create, Read, Update, Delete (CRUD operations like a free-standing entity)
This can be modeled as the following document templates:
User: { _id: UniqueId, name: string, roles: string[] }
Indexes: unique: [ name ]
Role: { _id: UniqueId, name: string, users: string[] }
Indexes: unique: [ name ]
To support the high-frequency uses, such as Role-related features of the User entity, User.roles is intentionally denormalized: it is stored on the User, and Role.users duplicates the same relationship.
If it is not readily apparent from the text: this is the type of thinking that is encouraged when using document repositories.
I hope that this helps bridge the gap with regard to the read side of the operations.
For the write side, what is encouraged is to model according to atomic writes. For instance, if the document structure requires acquiring a lock, updating one document, then another, and possibly more, and then releasing the lock, the model has likely failed. Just because we can build distributed locks doesn't mean we are supposed to use them.
For the case of the Users-in-Roles model, the operation that stretches our lock-free atomic writes is adding or removing a User from a Role. In either case, a successful operation results in both a single User and a single Role document being updated. If something fails, it is easy to perform cleanup. This is one reason the Unit of Work pattern comes up quite a lot where document repositories are used.
The operation that really stretches our lock-free atomic writes is clearing a Role, which would result in many User updates to remove the Role.name from each User.roles array. Clearing is therefore generally discouraged, but if needed it can be implemented by ordering the operations:
Get the list of user names from Role.users.
Iterate the user names from step 1, remove the role name from User.roles.
Clear the Role.users.
In the case of an issue, which is most likely to occur within step 2, a rollback is easy as the same set of user names from step 1 can be used to recover or continue.
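The three ordered steps can be sketched as runnable code over in-memory User and Role documents shaped like the templates in this answer (the sample names are hypothetical; against MongoDB each step would be an update/pull operation):

```javascript
const roleDoc = { name: 'Engineer', users: ['joe', 'sue'] };
const userDocs = [
  { name: 'joe', roles: ['Engineer', 'Admin'] },
  { name: 'sue', roles: ['Engineer'] },
  { name: 'ann', roles: ['Admin'] },
];

function clearRole(role, users) {
  const members = [...role.users];            // step 1: snapshot the member list
  for (const name of members) {               // step 2: remove the role from each user
    const user = users.find(u => u.name === name);
    user.roles = user.roles.filter(r => r !== role.name);
  }
  role.users = [];                            // step 3: clear the role itself
  return members;                             // kept so a failure in step 2 can be
                                              // retried or rolled back from the snapshot
}

clearRole(roleDoc, userDocs);
// roleDoc.users is [] and no user still lists 'Engineer'
```

Each step is idempotent, which is what makes the recovery story work: re-running step 2 with the step-1 snapshot is always safe.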
I've just stumbled upon this question and, although it's an old one, I thought it would be useful to add a couple of possibilities not mentioned in the answers given. Also, things have moved on a bit in the last few years, so it is worth emphasising that SQL and NoSQL are moving closer to each other.
One of the commenters brought up the wise cautionary attitude that “if data is relational, use relational”. However, that comment only makes sense in the relational world, where schemas always come before the application.
RELATIONAL WORLD: Structure data > Write application to get it
NOSQL WORLD: Design application > Structure data accordingly
Even if data is relational, NoSQL is still an option. For example, one-to-many relationships are no problem at all and are widely covered in the MongoDB docs.
A 2015 SOLUTION TO A 2010 PROBLEM
Since this question was posted, there have been serious attempts at bringing noSQL closer to SQL. The team led by Yannis Papakonstantinou at the University of California (San Diego) have been working on FORWARD, an implementation of SQL++ which could soon be the solution to persistent problems like the one posted here.
At a more practical level, the release of Couchbase 4.0 has meant that, for the first time, you can do native JOINs in NoSQL. They use their own N1QL. This is an example of a JOIN from their tutorials:
SELECT usr.personal_details, orders
FROM users_with_orders usr
USE KEYS "Elinor_33313792"
JOIN orders_with_users orders
ON KEYS ARRAY s.order_id FOR s IN usr.shipped_order_history END
N1QL allows for most if not all SQL operations, including aggregation, filtering, etc.
THE NOT-SO-NEW HYBRID SOLUTION
If MongoDB is still the only option, then I'd like to go back to my point that the application should take precedence over the structure of the data. None of the answers mention hybrid embedding, whereby the most-queried data is embedded in the document/object and references are kept only for a minority of cases.
Example: can information (other than the role name) wait? Could bootstrapping the application be faster by not requesting anything that the user doesn't need yet?
This could be the case if user logs in and s/he needs to see all the options for all the roles s/he belongs to. However, the user is an “Engineer” and options for this role are rarely used. This means the application only needs to show the options for an engineer in case s/he wants to click on them.
This can be achieved with a document which tells the application at the start (1) which roles the user belongs to and (2) where to get information about an event linked to a particular role.
{_id: ObjectId(),
 roles: [["Engineer", ObjectId()],
         ["Administrator", ObjectId()]]
}
Or, even better, index the role.name field in the roles collection, and you may not need to embed ObjectID() either.
Another example: is information about ALL the roles requested ALL the time?
It could also be the case that the user logs in to the dashboard and 90% of the time performs tasks linked to the “Engineer” role. Hybrid embedding could be done for that particular role in full and keep references for the rest only.
{_id: ObjectId(),
 roles: [{name: "Engineer",
          property1: value1,
          property2: value2
         },
         ["Administrator", ObjectId()]
 ]
}
Being schemaless is not just a characteristic of NoSQL; it could be an advantage in this case. It's perfectly valid to nest different types of objects in the roles property of a user object.
There are two approaches that can be used:
1st approach
Add reference links into the user document's roles list (array):
{
  '_id': ObjectId('312xczc324vdfd4353ds4r32'),
  'user': 'faizanfareed',
  'roles': [
    {'roleName': 'admin',  // optional: if a role is ever renamed, the name must also be updated in every user document; omit it if that bothers you
     'roleId': ObjectId('casd324vfdg65765745435v')
    },
    {'roleName': 'engineer',
     'roleId': ObjectId('casd324vfdvxcv7454rtr35vvvvbre')
    },
  ]
}
And (based on query requirements) we can also add user reference ids into the role document's users list (array):
{
  'roleName': 'admin',
  'users': [{'userId': ObjectId('312xczc324vdfd4353ds4r32')}, .......]
}
But adding user ids to the role document risks exceeding the 16 MB document size limit, which is not good at all. We can use this approach if the role document size stays under that limit and the number of users is bounded. If that can't be guaranteed, we can add role ids into user documents only.
2nd approach (the traditional one)
Create a new collection in which each document contains the ids of both a user and a role:
{
  '_id': ObjectId('mnvctcyu8678hjygtuyoe'),
  'userId': ObjectId('312xczc324vdfd4353ds4r32'),
  'roleId': ObjectId('casd324vfdg65765745435v')
}
Document size is not a concern here, but read operations are not as easy in this approach.
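To make the harder read concrete: with the join collection, listing a user's roles takes two queries (find the link documents, then fetch each role). A sketch with plain arrays standing in for the two collections (the ids and role names are hypothetical):

```javascript
const userRoles = [
  { userId: 'u1', roleId: 'r1' },
  { userId: 'u1', roleId: 'r2' },
  { userId: 'u2', roleId: 'r1' },
];
const rolesById = { r1: { name: 'admin' }, r2: { name: 'engineer' } };

function rolesForUser(userId) {
  return userRoles
    .filter(l => l.userId === userId)     // query 1: the link documents
    .map(l => rolesById[l.roleId].name);  // query 2 (per link): the role document
}
// rolesForUser('u1') is ['admin', 'engineer']
```

With embedded role ids (the 1st approach) the first query alone would already return the ids, which is why that approach reads more easily.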
Based on your requirements, go with the 1st or 2nd approach.
Final comments on this: go with the 1st approach and add only roleId into the user document's array, because the number of roles will not be greater than the number of users, and the user document size will not exceed 16 MB.
In the case where employee and company are both entity objects, try the following schema:
employee {
  // put the contracts on the employee
  contracts: [ item1, item2, item3, ... ]
}
company {
  // and duplicate them in the company
  contracts: [ item1, item2, item3, ... ]
}