I am creating an Ext.data.Session in a view panel and have models and associations set up.
How can I discard the session data? I did not find anything in the docs regarding this.
You should be able to just destroy the Session object altogether:
https://docs.sencha.com/extjs/6.2.0/modern/Ext.data.Session.html#method-destroy
The session (unlike the contained stores and associations) will not automatically persist its contained items, so these will not be affected by the destroy.
If you keep references to the contained objects elsewhere (like in your view), these should survive the destroy operation.
Within a Plotly Dash application, I am entering some user-specified data into a MongoDB database.
The Issue:
The first entry of the information is successful; however, any subsequent entries are not, and a pymongo.errors.DuplicateKeyError is raised.
I am speculating that since MongoDB ObjectId() generation is done client-side, no refresh is occurring, because all of the insert code runs inside an app.callback decorator within Dash and is likely executed in a thread or separate process.
Shutting down the app and re-starting allows for the insertion of a new record.
The question:
Is there a way to manually "refresh" the ObjectId generated within pymongo? I would likely want to do this after handling a DuplicateKeyError exception.
For anyone out there with this problem:
Simply build a new dict for each insert, setting dict['_id'] = ObjectId() yourself prior to the insert; do not let MongoDB handle it. (The underlying cause: insert_one() mutates the document you pass in by adding the generated _id to it, so if the callback reuses the same dict, every later insert re-sends the first _id.)
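A minimal sketch of that fix, assuming a pymongo collection; the connection string, database, and field names here are illustrative:

```python
# Build a fresh document on every call and assign the ObjectId explicitly,
# so a dict mutated by a previous insert_one() can never be reused.
from bson import ObjectId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["mydb"]["entries"]

def save_entry(name, value):
    # A brand-new dict with a brand-new _id on every insert.
    doc = {"_id": ObjectId(), "name": name, "value": value}
    collection.insert_one(doc)
    return doc["_id"]
```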
I am trying to make a card game using Socket.IO, and I am having problems assigning user-specific data (in my case, the cards that each user has).
I'm familiar with JavaScript, but I'm not sure whether there is a specific feature in Socket.IO for assigning user-specific data, or whether I have to store the information in a database or an array of sorts.
There are ways to attach data to each socket in Socket.IO, but it's probably easier to put your data in an associative array whose keys are the socket IDs. Create the key-value pair upon connection, and make sure you delete the pair in the disconnect event handler.
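For illustration, here is that pattern sketched in Python with the python-socketio server library (the Node.js Socket.IO API is analogous); the players mapping, card values, and event names are illustrative:

```python
import socketio

sio = socketio.Server()
app = socketio.WSGIApp(sio)

players = {}  # socket ID -> per-user state

@sio.event
def connect(sid, environ):
    players[sid] = {"cards": []}  # create the entry on connection

@sio.event
def draw_card(sid, data):
    players[sid]["cards"].append(data["card"])

@sio.event
def disconnect(sid):
    del players[sid]  # clean up so the map does not leak entries
```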
I'm having some confusion about the session object in SQLAlchemy. Is it like a PHP session, where a session covers all the transactions of a user, or is a session an entity that scopes the lifetime of a single transaction?
For every transaction in SQLAlchemy, is the procedure as follows:
- create and open a session
- perform the transaction
- commit or roll back
- close the session
So, my question is: for a client, do we create a single session object, or is a session object created whenever we have a transaction to perform?
I would be hesitant to compare a SQLAlchemy session with a PHP session, since typically a PHP session refers to cookies, whereas SQLAlchemy has nothing to do with cookies or HTTP at all.
As explained by the documentation:
A Session is typically constructed at the beginning of a logical
operation where database access is potentially anticipated.
The Session, whenever it is used to talk to the database, begins a
database transaction as soon as it starts communicating. Assuming the
autocommit flag is left at its recommended default of False, this
transaction remains in progress until the Session is rolled back,
committed, or closed. The Session will begin a new transaction if it
is used again, subsequent to the previous transaction ending; from
this it follows that the Session is capable of having a lifespan
across many transactions, though only one at a time. We refer to these
two concepts as transaction scope and session scope.
The implication here is that the SQLAlchemy ORM is encouraging the
developer to establish these two scopes in his or her application,
including not only when the scopes begin and end, but also the expanse
of those scopes, for example should a single Session instance be local
to the execution flow within a function or method, should it be a
global object used by the entire application, or somewhere in between
these two.
As you can see, it is completely up to the developer of the application to determine how to use the session. In a simple desktop application, it might make sense to create a single global session object and just keep using it, committing as the user hits "save". In a web application, a "session per request" strategy is often used. Sometimes both strategies are used in the same application (session-per-request for web requests, but a single session with slightly different properties for background tasks).
There is no "one size fits all" solution for when to use a session. The documentation does give hints as to how you might go about determining this.
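For illustration, a minimal sketch of the session-per-operation pattern described above (the engine URL and the User model are placeholders):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///app.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

def rename_user(user_id, new_name):
    # One session scopes one logical operation:
    # open, work, commit or roll back, close.
    session = Session()
    try:
        user = session.get(User, user_id)
        user.name = new_name
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        session.close()
```

A web framework would typically wrap exactly this open/commit-or-rollback/close cycle around each request, while a desktop application might keep one session alive for its whole run.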
For performance reasons, our application loads certain SQLAlchemy model instances into memory at startup time. These instances are not expected to change through the life of the application, so this is a reasonable solution.
For the most part, this has been fine, but I have observed isolated incidents where a DetachedInstanceError: Instance <ZZZ at 0x3a23510> is not bound to a Session; attribute refresh operation cannot proceed appears, causing ongoing problems. This is the same error I would (expectedly) receive when attempting to access a lazy-loaded relationship on a similarly cached object.
The error appears to be caused by access to the .id attribute, which I would expect not to require any kind of DB refresh. What really bothers me is that I cannot reproduce this exception consistently.
My question is: what might cause this exception to occur, and what techniques have others used for storing SQLAlchemy instances beyond the transaction that created them?
.id would be missing if the object was expired before it was placed into the cache. After a Session commits or rolls back, it expires all attributes by default; they are then refreshed from the database on next access.
It's not a given that .id still holds its old value: the row may have been deleted from the database or its primary key modified (both conditions would raise ObjectDeletedError on the refresh attempt).
The solution is to cache your objects before they get expired, expunge() them from the session before the session expires everything, or disable expire_on_commit.
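A minimal sketch of both options (the engine URL and the Config model are placeholders):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Config(Base):
    __tablename__ = "config"
    id = Column(Integer, primary_key=True)
    value = Column(String)

engine = create_engine("sqlite:///app.db")
Base.metadata.create_all(engine)

# Option 1: don't expire attributes on commit, so cached instances
# keep their loaded values instead of trying to refresh later.
Session = sessionmaker(bind=engine, expire_on_commit=False)

def load_cache():
    session = Session()
    items = session.query(Config).all()
    # Option 2: detach the instances explicitly; they keep their
    # loaded attribute values and no longer reference the session.
    for item in items:
        session.expunge(item)
    session.close()
    return {item.id: item for item in items}
```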
I have a C++ application that loads lots of data from a database, then executes algorithms on that data (these algorithms are quite CPU- and data-intensive, which is why I load all the data beforehand), then saves all the data that has been changed back to the database.
The database part is nicely separated from the rest of the application. In fact, the application does not need to know where the data comes from. The application could even be run against files (in that case a separate file module loads the files into the application and at the end saves all data back to the files).
Now:
- the database layer only wants to save the changed instances back to the database (not all the data), so it needs to know what the application has changed.
- on the other hand, the application doesn't need to know where the data comes from, and hence should not be forced to keep a change-state per instance of its data.
To keep my application and its datastructures as separate as possible from the layer that loads and saves the data (could be database or could be file), I don't want to pollute the application data structures with information about whether instances were changed since startup or not.
But to make the database layer as efficient as possible, it needs a way to determine which data has been changed by the application.
Duplicating all data and comparing the data while saving is not an option since the data could easily fill several GB of memory.
Adding observers to the application data structures is not an option either, since performance within the application algorithms is critical (looping over all observers and calling virtual functions could create a significant bottleneck in the algorithms).
Any other solution? Or am I trying to be too 'modular' if I don't want to add logic to my application classes in an intrusive way? Is it better to be pragmatic in these cases?
How do ORM tools solve this problem? Do they also force application classes to keep a kind of change-state, or do they force the classes to have change-observers?
If you can't copy the data and compare, then clearly you need some kind of record somewhere of what has changed. The question, then, is how to update those records.
ORM tools can (if they want) solve the problem by keeping flags in the objects, saying whether the data has been changed or not, and if so what. It sounds as though you're making raw data structures available to the application, rather than objects with neatly encapsulated mutators that could update flags.
So an ORM doesn't normally require applications to track changes in any great detail. The application generally has to say which object(s) to save, but the ORM then works out what needs persisting to the DB in order to do that, and might apply optimizations there.
I guess that means that in your terms, the ORM is adding observers to the data structures in some loose sense. It's not an external observer, it's the object knowing how to mutate itself, but of course there's some overhead to recording what has changed.
One option would be to provide "slow" mutators for your data structures, which update flags, and also "fast" direct access, and a function that marks the object dirty. It would then be the application's choice whether to use the potentially-slower mutators that permit it to ignore the issue, or the potentially-faster mutators which require it to mark the object dirty before it starts (or after it finishes, perhaps, depending what you do about transactions and inconsistent intermediate states).
You would then have two basic situations:
I'm looping over a very large set of objects, conditionally making a single change to a few of them. Use the "slow" mutators, for application simplicity.
I'm making lots of different changes to the same object, and I really care about the performance of the accessors. Use the "fast" mutators, which perhaps directly expose some array in the data. You gain performance in return for knowing more about the persistence model.
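A sketch of that slow/fast split; the idea is language-agnostic, so it is shown here in Python for brevity, and all names are illustrative:

```python
class TrackedRecord:
    """A record that can mark itself dirty for the persistence layer."""

    def __init__(self, values):
        self._values = list(values)
        self.dirty = False

    def set_value(self, index, value):
        # "Slow" mutator: every write also updates the dirty flag.
        self._values[index] = value
        self.dirty = True

    def raw_values(self):
        # "Fast" path: direct access to the underlying storage.
        # The caller must call mark_dirty() itself after mutating.
        return self._values

    def mark_dirty(self):
        self.dirty = True


def save_changed(records, persist):
    # The persistence layer writes only the records whose flag is set,
    # so the save path scales with the number of changes, not the data size.
    for record in records:
        if record.dirty:
            persist(record)
            record.dirty = False
```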