In the before_flush event I have to do the following actions:
1. Issue some select queries to check that I can perform the actions below.
2. Delete some objects from collection B.
3. Check that collection B no longer contains any objects matching certain criteria (see step 2).
4. Add a list of objects to collection B.
The problem is that the select query in step 3 still returns the objects I marked as deleted in step 2, but I want it to return an empty list. :)
All of these actions have to happen inside one transaction. How can I do this?
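For illustration, here is a minimal sketch of the flow described above (ItemB, its columns, and the filter criteria are placeholder names, not from the original question). Inside before_flush the deletes from step 2 are only pending, so a plain query still sees those rows; one possible workaround is to filter the result against session.deleted:

from sqlalchemy import event
from sqlalchemy.orm import Session

@event.listens_for(Session, "before_flush")
def before_flush(session, flush_context, instances):
    # step 1: select queries to check that the actions below are allowed
    if session.query(ItemB).filter(ItemB.locked.is_(True)).count() > 0:
        return
    # step 2: delete some objects from collection B
    for obj in session.query(ItemB).filter(ItemB.obsolete.is_(True)):
        session.delete(obj)
    # step 3: this query still returns the rows deleted above, because the
    # DELETE statements have not been emitted yet; filtering against
    # session.deleted hides the pending deletions
    remaining = [
        o for o in session.query(ItemB).filter(ItemB.obsolete.is_(True))
        if o not in session.deleted
    ]
    # step 4: add a list of new objects to collection B
    if not remaining:
        session.add_all([ItemB(name=n) for n in ("a", "b", "c")])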
I'm new to the world of Low Code app development, and so far I'm pulling my hair out.
I'm using a third-party web app to submit JSON-formatted data to Zapier via a webhook, and then submit that to Backendless with a Codeless API that creates a record. I'm running into two issues that I can't figure out how to solve:
1. Backendless record creation with a foreign key relationship. I'm creating a record in Table A, but that record needs a relationship to Table B. I have it set up that way in Backendless, but in Zapier I don't see an option to populate table_b_id on the Table A record I'm creating. What am I missing here?
2. After creating the Table A record, I want to create multiple records in Table C that are children of the Table A record. How on earth do I do this? With Python + SQL I could do it in two minutes, but for the life of me I can't figure out how to do it the low-code way using either Zapier or Backendless.
Any help would be appreciated! I'm totally stumped.
Backendless actions for Zapier let you save/update/delete an object in a single table. These are distinct API operations. Creating a relationship is a separate API call that doesn't have a corresponding action in Zapier's Backendless integration. However, you can create a relation between the object you're saving and a related "parent" (or "child") table using an API event handler in business logic. It can be done with Java, JS or Codeless. The event handler you'd be creating is afterSave.
You can save multiple objects with a single call using Codeless. The simplest way to do this is by using the Bulk Create block: https://prnt.sc/x6cwp4. The objects connector should be a list of objects to save in the table.
I am trying to make a backup table of users, called archived users. It creates the ArchivedUser by taking a hash of the current user's attributes (self) and merging in self.id as the user_id.
When a user is reinstated, their record as an ArchivedUser still remains in the ArchivedUser table. If the user gets deleted a second time, it should update any attributes that have changed.
Currently, it throws a validation error:
Validation failed: User has already been taken, as the self.id already exists in the ArchivedUser table.
What is a better way to handle this: update an existing record if possible, or create a new one if it doesn't exist? I am using Rails 4 and have tried find_or_create_by, but it throws an error:
Mysql2::Error: Unknown column 'device_details.device_app_version'
which is odd, as that column exists in both tables and doesn't get modified.
User Delete Method
# creates ArchivedUser with the exact attributes of the User
# object and merges self.id to fill user_id on ArchivedUser
if ArchivedUser.create!(
  self.attributes.merge(user_id: self.id)
)
  # ... rest of the delete logic
end
Thanks for taking a peek!
If your archived_users table is truly acting as a backup for users and not adding any additional functionality, I would ditch the ArchivedUser model and simply add an archived boolean on the User model to tell whether or not the user is archived.
That way you don't have to deal with moving an object to another table and hooking into a delete callback.
However, if your ArchivedUser model does offer some functionality different from User, another option would be to use single table inheritance to differentiate the type of user. In this case, you could have User govern all users, and then distinguish between a user being, for example, an ActiveUser or an ArchivedUser.
This takes more setup and can be a bit confusing if you haven't worked with STI, but it can be useful when two similar models need to differ only slightly.
That being said, if you want to keep your current setup, there are a few issues I see with your code:
If you are going to create an object from an existing object, it's good practice to duplicate the object (dup). That way the id won't be automatically set and can be auto-incremented.
If you truly have deleted the User record from the database, there's no reason to store a reference to its id because it's gone. But if you aren't actually deleting the record, you should definitely just use a boolean attribute to determine whether or not the user is active or archived.
I don't have enough context here as to why find_or_create_by isn't working, but if you do go that route, I would keep it as simple as possible: don't match on all the attributes, just the consistent ones (like id) that you know will return the proper result.
if ArchivedUser.create! # ... is problematic. The bang after create (i.e. create!) will raise an error if the record could not be created, making the if pointless. So either use create (without the bang) with the if when you don't want errors raised and want to handle the case where the record was not created, or use create! without the if when you do want an error to be raised.
Let's set the stage: PHP & MySQL.
I have a table we'll call Directions to hold each step for a given task. Each task has variable steps.
The only really essential fields for this question are step_id (primary), task_id, and step.
When the author updates their directions, they can update each step, add new ones, remove old ones.
I understand how to handle the update / insert / delete logic (INSERT ... ON DUPLICATE KEY UPDATE ... and so on).
My concern lies elsewhere. Say someone writes directions for task #1, five months later they update task #1 with a few new steps. In that time, there are 1000 new tasks.
Is it really an issue that the majority of steps for task #1 will be located in say... step_id 1-10, and that new step will be way down in 10001?
Since I run no specific computations on the steps, is this a situation where I'm better off storing each step as a serialized array in a single row?
I believe task_id is quite an essential field for this question too. If it is indexed, you won't have any performance issues selecting the steps for a given task, no matter how far apart they are in your table.
With the steps serialized you would have more issues: you would have to deserialize them for every update/delete, serialize them again, and then update the row. You could also run into problems with column size unless you limit the number of steps for every task.
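To make that trade-off concrete, here is a rough Python sketch of what every edit looks like with a serialized column (the tasks table, steps column, and DB-API cursor are hypothetical, not from the question):

import json

def update_step(cursor, task_id, step_index, new_text):
    # read the whole serialized list just to change one step
    cursor.execute("SELECT steps FROM tasks WHERE task_id = %s", (task_id,))
    steps = json.loads(cursor.fetchone()[0])
    steps[step_index] = new_text
    # serialize everything again and rewrite the entire column
    cursor.execute(
        "UPDATE tasks SET steps = %s WHERE task_id = %s",
        (json.dumps(steps), task_id),
    )

With one row per step, the same edit would be a single UPDATE targeted by the indexed task_id and the step's own key.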
I have a simple problem. My table has the columns ID, NAME, CONTENT and TIMESTAMP. If I use session.query(table).all(), the result is a list of table class objects, so I have no problem modifying one or more objects and updating them, or using those objects in associations. But I only need the ID and NAME columns. If I use session.query(table.id, table.name), the result is tuples of id and name. Using those tuples in updates is maybe doable, but the code gets ugly, and the same goes for associations. There is also unnecessary database load if I only want the ID to build an association.
Until now I have used this simple code to get a list of table class objects while querying only the specified columns:
data = []
for row in session.query(table.id, table.name).all():
    data.append(table(id=row.id, name=row.name))
This makes updates easy, but using the objects in associations is complicated. For example, say I get 10 rows as a list of table objects from my code and I want to associate one of them with table2.
If I use:
session.add(table2(name='test', association=data[1])) and commit,
SQLAlchemy wants to create a new table2 row, a new association table row and a new table row (this one created first, associated to table2).
Is there any way to get the result as a list of table objects (not tuples) with only the specified columns, where the returned objects behave the same as when querying with query(table)?
Sorry for my English; I have never used it to describe a complex problem until now.
If you want to reduce the number of columns loaded from the db with .query(Table), thus reducing the overhead, you might want to try deferring the unneeded columns when defining the mapping. Say id and name are essential for your updates and info is not; you can tell the mapper to defer info:
from sqlalchemy.orm import mapper, column_property

mapper(Class, table, properties={
    "info": column_property(table.c.info, deferred=True),
    # ... other properties ...
})
So when you do session.query(Class), info won't be picked up in the SELECT. It will be loaded only when somewhere in the code you access obj.info, that is, when you explicitly ask for this attribute.
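A rough usage sketch of that behaviour (Class and the column names follow the placeholders used above; the undefer option is shown as one assumed way to opt back in for a particular query):

from sqlalchemy.orm import undefer

objs = session.query(Class).all()   # the SELECT includes id, name, etc., but not info
first = objs[0]
print(first.info)                   # accessing the attribute loads info with an extra SELECT

# if a particular query is known to need info, it can be undeferred up front
objs = session.query(Class).options(undefer("info")).all()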
The Setup:
I have a large form with many fields that are collected to update a Product object. So in the ASPX page the user changes the fields that need updating and hits submit. In the code-behind I do something like this:
Dim p as New MyCompany.Product()
p = p.GetProductById(ProductID)
I extend the Product partial class of LINQ to SQL to add this method (GetProductById) to the object.
p.Name = txtName.Text
p.SKU = txtSKU.Text
p.Price = txtPrice.Text
...
p.Update()
This is an Update method in the extended Product partial class. I update the database, send emails and update history tables, so I want this method to do all of those things.
There are 50 more fields for the project, so obviously it would be ridiculous to have a method that collects all 50 fields (and I don't want to go that route anyway because it's harder to debug, IMO).
The Problem:
If I get the Product via LINQ to SQL using a DataContext, then I can never update it again because it errors about not being able to attach an entity that's already attached to another DataContext.
The Question:
So if I get an object through a method in my BLL, update it in the ASPX page, and then try to send the updates through the BLL again to update the database, how should I go about doing this?
Regardless of whether you use LINQ to SQL or not, here's what I do: upon submission, I search for the item (it should be quick if it is a single item looked up by its PK), my DAL returns a data object, and I use reflection to map each element on the page to the corresponding property in the data object. My DAL only updates items that changed.
I think what you have to do is the same: gather all the values and submit them. If LINQ to SQL is not smart enough to determine what changed, then it may not be the best alternative.