How can I roll back changes made in a detail table (edits and deletes in a GridView) to its original state when the user does not click the save button in the master table?
What changes are there to roll back? You should not be making any changes to the database until the user presses that save button.
Load the ActiveDataProvider data into an array and put it in the session.
Use the session array to feed an ArrayDataProvider.
Show the data using the GridView. Any changes made in the GridView should be applied to the array (not the DB).
When the user saves the master record, apply the changes from the array to the DB.
You should also not be making any changes to the DB directly because of concurrent users.
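Framework aside, the buffer-then-apply idea above can be sketched in a few lines of Python (the "session" here is just a dict standing in for the PHP session, and the table and column names are invented; in Yii the array would back an ArrayDataProvider):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE detail (id INTEGER PRIMARY KEY, qty INTEGER)")
db.executemany("INSERT INTO detail VALUES (?, ?)", [(1, 10), (2, 20)])
db.commit()

# 1. Load the provider's data into an array and keep it in the "session".
session = {"detail": dict(db.execute("SELECT id, qty FROM detail"))}

# 2. Grid edits and deletes touch only the session copy, never the DB.
session["detail"][1] = 99       # user edits row 1
del session["detail"][2]        # user deletes row 2

# If the user never saves, the DB is untouched -- nothing to roll back.
assert dict(db.execute("SELECT id, qty FROM detail")) == {1: 10, 2: 20}

# 3. Only when the master record is saved is the array applied to the DB.
db.execute("DELETE FROM detail")
db.executemany("INSERT INTO detail VALUES (?, ?)",
               session["detail"].items())
db.commit()
print(dict(db.execute("SELECT id, qty FROM detail")))  # {1: 99}
```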
I have an initial_data.json file in my project which is loaded into the database initially using syncdb.
Now every time I migrate the app (South migration), the old initial_data gets reloaded into the tables.
I don't want those tables to be updated again, since some of the columns have been modified by users. I need to skip those tables; any solutions?
I have a python app that has an admin dashboard.
There I have a button called "Update DB".
(The app uses MySQL and SQLAlchemy)
Once it's clicked, it makes an API call, gets a list of data, and writes it to the DB; if the API call returns new records it adds them without duplicating currently existing records.
However, if the API call returns fewer items, the records that are no longer returned are not deleted from the DB.
Since I don't even have a starting point to google from, I need some guidance on what type of SQL query my app should be making.
Once the button is clicked, it needs to go through all the rows:
update existing records that have changed,
add new ones if there are any returned by the API call,
delete ones that the API call did not return.
What is this operation called, or how can I accomplish this in MySQL?
Once I find out about this, I'll work out how to do it in SQLAlchemy.
You may want to set a timestamp column to the time of the latest action on each row, and have a background thread remove stale rows as a separate step. I don't know of any single atomic action that performs the desired data reformation. Another option that might be satisfactory is to write the replacement batch to a staging table, rename both versions (swap), and drop the old table. HTH
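What the question describes is usually called an "upsert" (or MERGE) plus a delete of the rows that are no longer returned. A minimal sketch of the three steps using Python's built-in sqlite3, so it runs as-is (the table and column names are invented; SQLite 3.24+ spells the upsert `ON CONFLICT ... DO UPDATE`, while MySQL uses `INSERT ... ON DUPLICATE KEY UPDATE`):

```python
import sqlite3

# Hypothetical API payload: each item has a unique key "id".
api_items = [
    {"id": 1, "name": "alpha"},
    {"id": 3, "name": "gamma"},
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(1, "old-alpha"), (2, "beta")])

# Steps 1 + 2: update changed records, insert new ones ("upsert").
conn.executemany(
    "INSERT INTO items (id, name) VALUES (:id, :name) "
    "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
    api_items,
)

# Step 3: delete rows the API call did not return.
api_ids = [item["id"] for item in api_items]
placeholders = ",".join("?" * len(api_ids))
conn.execute(f"DELETE FROM items WHERE id NOT IN ({placeholders})", api_ids)
conn.commit()

print(sorted(conn.execute("SELECT id, name FROM items")))
# [(1, 'alpha'), (3, 'gamma')]
```

In SQLAlchemy the same shape is available via the dialect-specific `insert()` constructs (e.g. `sqlalchemy.dialects.mysql.insert(...).on_duplicate_key_update(...)`).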
It's been a while since I had to do anything with an Access Project, so forgive me if this is something easy.
I have an Access 2007 database which I have converted into an Access 2007 Project. I've created all the tables & views on a SQL 2008 server, granted all the necessary permissions and started testing.
Where I'm having trouble is, I have a form that is used to update data. The form is fed from a view (all tables in the view have primary keys) and the view is schema bound. On the form, for some of the fields I have a "Change" event handler which updates a "Last_Worked" field so we can track the last time those fields were changed. (The "Last_Worked" field is "datetime" on the SQL server.) The event handlers are basically "Last_Worked = Now()".
The problem is, for those fields where I have a "Change" event handler, I can't put anything into those fields: I start typing and nothing is displayed in the field. When I check the "Last_Worked" field in the table, it's updating, but the fields I attempted to change from the form aren't changing.
I can update data and insert new records from a dataview, so the view that is feeding the form is not read-only. If I remove the event handlers the problem goes away, but I need to log when those fields are updated. I've tried both "Me!Last_Worked = Now()" and "Me!Last_Worked.Value = Now()" in the event handlers and get the same problem: I'm not able to update/edit those fields.
This worked perfectly in Access and I even created a blank Access database, copied the forms over, created linked tables to the views in the server and it worked. It just doesn't work as an Access Project.
Any help would be appreciated.
When you say event handler, do you mean you have set the control source? You cannot do that if you need the control for data entry. Note also that the Change event runs for every keystroke. You can either use a default value for new records, or run some VBA in an update event of a relevant control. I reckon in this case you need:
Private Sub Form_BeforeUpdate(Cancel As Integer)
    Me.LastChanged = Now()
End Sub
Has anyone run into this problem before?
I removed part of the data in a text-type field in a table. For example, the data was 'adcdefghi'; after the removal it became 'abcd'. However, when I retrieve data from that field, the result is still 'adcdefghi'.
I'm sure that I changed the right database. Is there something I have to do before I retrieve the data?
Maybe you need to do a COMMIT after changing the value.
The Commit method of the Database object finalizes the persistent form of the database.
a. You must ensure that you COMMIT the transaction.
b. Make sure that the query browser is not in "Start a new transaction" mode while you're running the transaction in your system.
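The symptom is easy to reproduce with any transactional database. Here is a small sketch using Python's sqlite3 with two separate connections to the same file; the original question doesn't name the database product, so treat this purely as an illustration of uncommitted changes being invisible to other connections:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (val TEXT)")
writer.execute("INSERT INTO t VALUES ('adcdefghi')")
writer.commit()

# Shorten the value, but do NOT commit yet.
writer.execute("UPDATE t SET val = 'abcd'")

# A second connection still sees the last committed value.
reader = sqlite3.connect(path)
before = reader.execute("SELECT val FROM t").fetchall()[0][0]
reader.close()

writer.commit()  # finalize the change

reader = sqlite3.connect(path)
after = reader.execute("SELECT val FROM t").fetchall()[0][0]
print(before, after)  # adcdefghi abcd
```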
I am working on a data warehousing project where several systems are loading data into a staging area for subsequent processing. Each table has a "loadId" column which is a foreign key against the "loads" table, which contains information such as the time of the load, the user account, etc.
Currently, the source system calls a stored procedure to get a new loadId, adds the loadId to each row that will be inserted, and then calls a third sproc to indicate that the load is finished.
My question is: is there any way to avoid having to pass the loadId back to the source system? For example, I was imagining that I could get some sort of connection ID from SQL Server that I could use to look up the relevant loadId in the loads table. But I am not sure whether SQL Server has a variable that is unique to a connection?
Does anyone know?
Thanks,
I assume the source systems are writing/committing the inserts into your source tables, and multiple loads are NOT running at the same time...
If so, have the source system call a stored proc, newLoadStarting(), prior to starting the load. This stored proc will update the load table (create a new row, record the start time).
Put a trigger on your loadID column that will get max(loadID) from this table and insert it as the current load id.
For completeness you could add an endLoading() proc which sets an end date and de-activates that particular load.
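A sketch of that trigger idea, written in SQLite DDL driven from Python so it can be run as-is (T-SQL trigger syntax on SQL Server differs, and the table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE loads (loadId INTEGER PRIMARY KEY, startTime TEXT);
CREATE TABLE staging (rowData TEXT, loadId INTEGER);

-- Stamp each inserted staging row with the most recent loadId.
CREATE TRIGGER stampLoadId AFTER INSERT ON staging
BEGIN
    UPDATE staging
    SET loadId = (SELECT MAX(loadId) FROM loads)
    WHERE rowid = NEW.rowid;
END;
""")

# newLoadStarting(): record a new load.
conn.execute("INSERT INTO loads (startTime) VALUES (datetime('now'))")

# The source system inserts rows without ever seeing the loadId...
conn.execute("INSERT INTO staging (rowData) VALUES ('some row')")

# ...yet the trigger has stamped them with the current load.
print(conn.execute("SELECT loadId FROM staging").fetchone()[0])  # 1
```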
If you are running multiple loads at the same time in the same tables...stop doing that...it's not very productive.
A local temp table (with one pound sign: #temp) is unique to the session; dump the ID in there, then select from it.
BTW, this will only work if you use the same connection.
In the end, I went for the following solution "pattern", pretty similar to what Markus was suggesting:
I created a table with a loadId column, default null (plus some other audit info like createdDate and createdByUser);
I created a view on the table that hides the loadId and audit columns, and only shows rows where loadId is null;
The source systems load and view data through the view, not the table;
When they are done, the source system calls a "sp__loadFinished" procedure, which puts the right value in the loadId column and does some other logging (number of rows received, date called, etc). I generate this from a template as it is repetitive.
Because loadId now has a value for all those rows, it is no longer visible to the source system and it can start another load if required.
I also arrange for each source system to have its own schema, which is the only thing it can see and is its default on logon. The view and the sproc are in this schema, but the underlying table is in a "staging" schema containing data across all the sources. I ensure there are no collisions through a naming convention.
Works like a charm, including the one case where a load can only be complete if two tables have been updated.
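The pattern above can be sketched end to end in a few lines (again SQLite via Python for runnability; the real setup inserts through the view and uses per-source schemas, which SQLite doesn't model, so the insert here goes to the table directly):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stagingData (rowData TEXT, loadId INTEGER DEFAULT NULL);
-- The source system only ever sees rows belonging to no finished load.
CREATE VIEW sourceView AS
    SELECT rowData FROM stagingData WHERE loadId IS NULL;
""")

# The source system loads its data (through the view in the real setup).
conn.executemany("INSERT INTO stagingData (rowData) VALUES (?)",
                 [("a",), ("b",)])
print(conn.execute("SELECT COUNT(*) FROM sourceView").fetchone()[0])  # 2

# sp__loadFinished: stamp the pending rows with the new loadId.
conn.execute("UPDATE stagingData SET loadId = 1 WHERE loadId IS NULL")

# The rows vanish from the source's view; it can start another load.
print(conn.execute("SELECT COUNT(*) FROM sourceView").fetchone()[0])  # 0
```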