Adding a subclass of NSManagedObject (Item) to the UIPasteboard - Cut, Copy, Paste

I'm trying to implement Cut, Copy, Paste in my application. The Items that I would like to store on the UIPasteboard are subclasses of NSManagedObject.
I followed this answer and it was great up until I needed to copy the relationships. I started with self.entity.attributesByName.allKeys:
for (NSString *theKey in self.entity.attributesByName.allKeys) {
    id theValue = [self valueForKey:theKey];
    if (theValue) {
        [screenCDElementDict setObject:theValue forKey:theKey];
    }
}
Then I added self.entity.relationshipsByName.allKeys:
for (NSString *theKey in self.entity.relationshipsByName.allKeys) {
    id theValue = [self valueForKey:theKey];
    if (theValue) {
        [screenCDElementDict setObject:theValue forKey:theKey];
    }
}
Then I ended up just using self.entity.propertiesByName.allKeys:
for (NSString *theKey in self.entity.propertiesByName.allKeys) {
    id theValue = [self valueForKey:theKey];
    if (theValue) {
        [screenCDElementDict setObject:theValue forKey:theKey];
    }
}
When I archive screenCDElementDict with the statement
[archiver encodeObject:screenCDElementDict forKey:@"TheObject"];
I get an error saying:
-[MyNSManagesObjectRelation encodeWithCoder:]: unrecognized selector sent to instance 0x72db3d0
So it looks like it is trying to make a copy of the relationship entity itself, not the pointer to the entity.
I don't want to copy the actual relationship entity, just the pointer to it.
So when I paste the new Item, I create a new NSManagedObject and can then relate the referenced entities from the original NSManagedObject to the new NSManagedObject.
It seems like the whole reason for copying out all of the attributes and relationships manually was that encodeWithCoder and managed objects do not play nicely together.
Even the answer that mentions having to call [super initWithEntity:insertIntoManagedObjectContext:] does not say anything about copying a pointer to the relationships for the entity.
I could create my archive for each relationship entity NSManagedObject too, though when I create the new Master, I would not be referring to the existing relationship entity, but would be creating a new one...
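What I'm after, I think, is something like the following sketch: encode each relationship as the URIRepresentation of its objectID (an NSURL, which conforms to NSCoding) rather than the object itself. This assumes the related objects are already saved, so their objectIDs are permanent:
for (NSString *theKey in self.entity.relationshipsByName.allKeys) {
    NSRelationshipDescription *theRelationship = [self.entity.relationshipsByName objectForKey:theKey];
    if ([theRelationship isToMany]) {
        NSMutableArray *theURIs = [NSMutableArray array];
        for (NSManagedObject *theObject in [self valueForKey:theKey]) {
            [theURIs addObject:[[theObject objectID] URIRepresentation]];
        }
        [screenCDElementDict setObject:theURIs forKey:theKey];
    } else {
        NSManagedObject *theObject = [self valueForKey:theKey];
        if (theObject) {
            [screenCDElementDict setObject:[[theObject objectID] URIRepresentation] forKey:theKey];
        }
    }
}
// On paste, a URI resolves back to the existing object:
// NSManagedObjectID *theID = [[theContext persistentStoreCoordinator]
//                             managedObjectIDForURIRepresentation:theURI];
// NSManagedObject *theOriginal = [theContext objectWithID:theID];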
Thanks,
Scott

Related

What is the difference between Set, Map, WeakSet, and WeakMap in ES6? [duplicate]

There are already some questions about maps and weak maps, like this one: What's the difference between ES6 Map and WeakMap? But I would like to ask in which situations I should favor the use of these data structures, or what I should take into consideration when favoring one over the others.
Examples of the data structures, from https://github.com/lukehoban/es6features:
// Sets
var s = new Set();
s.add("hello").add("goodbye").add("hello");
s.size === 2;
s.has("hello") === true;
// Maps
var m = new Map();
m.set("hello", 42);
m.set(s, 34);
m.get(s) == 34;
// Weak Maps
var wm = new WeakMap();
wm.set(s, { extra: 42 });
wm.size === undefined
// Weak Sets
var ws = new WeakSet();
ws.add({ data: 42 });
// Because the added object has no other references, it will not be held in the set
Bonus. Which of the above data structures will produce the same/similar result of doing: let hash = Object.create(null); hash[index] = something;
This is covered in §23.3 of the specification:
If an object that is being used as the key of a WeakMap key/value pair is only reachable by following a chain of references that start within that WeakMap, then that key/value pair is inaccessible and is automatically removed from the WeakMap.
So the entries in a weak map, if their keys aren't referenced by anything else, will be reclaimed by garbage collection at some point.
In contrast, a Map holds a strong reference to its keys, preventing them from being garbage-collected if the map is the only thing referencing them.
MDN puts it like this:
The key in a WeakMap is held weakly. What this means is that, if there are no other strong references to the key, then the entire entry will be removed from the WeakMap by the garbage collector.
And WeakSet does the same.
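Garbage collection itself isn't directly observable from script, but a small sketch shows the reachability difference:
let key = { id: 1 };
const strongMap = new Map([[key, "data"]]);
const weakMap = new WeakMap([[key, "data"]]);
key = null;
// strongMap still reaches the object through its key, keeping it alive;
// the weakMap entry is now eligible for garbage collection.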
...in which situations should I favor the use of these data structures?
Any situation where you don't want the fact you have a map/set using a key to prevent that key from being garbage-collected. Here are some examples:
Keeping instance-specific information that is truly private to the instance, which looks like this (note: this example is from 2015, well before private fields were an option; here in 2021, I'd use private fields for this):
let Thing = (() => {
    var privateData = new WeakMap();

    class Thing {
        constructor() {
            privateData.set(this, {
                foo: "some value"
            });
        }
        doSomething() {
            console.log(privateData.get(this).foo);
        }
    }

    return Thing;
})();
There's no way for code outside that scoping function to access the data in privateData. That data is keyed by the instance itself. You wouldn't do that without a WeakMap because it would be a memory leak: your Thing instances would never be cleaned up. But a WeakMap only holds weak references, so if your code using a Thing instance is done with it and releases its reference to the instance, the WeakMap doesn't prevent the instance from being garbage-collected; instead, the entry keyed by the instance is removed from the map.
Holding information for objects you don't control. Suppose you get an object from some API and you need to remember some additional information about that object. You could add properties to the object itself (if it's not sealed), but adding properties to objects outside of your control is just asking for trouble. Instead, you can use a WeakMap keyed by the object to store your extra information.
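For instance, a sketch of annotating objects you don't own (the names here are made up):
const extraInfo = new WeakMap();

function noteSource(apiObject, source) {
    // Record where this object came from without touching the object itself
    extraInfo.set(apiObject, { source });
}

function sourceOf(apiObject) {
    const info = extraInfo.get(apiObject);
    return info && info.source;
}
// When the API object is no longer referenced elsewhere, its entry goes away with it.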
One use case for WeakSet is tracking or branding: Suppose that before "using" an object, you need to know whether that object has ever been "used" in the past, but without storing that as a flag on the object (perhaps because if it's a flag on the object, other code can see it [though you could use a private field to prevent that]; or because it's not your object [so private fields wouldn't help]). For instance, this might be some kind of single-use access token. A WeakSet is a simple way to do that without forcing the object to stay in memory.
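A sketch of that single-use-token idea:
const usedTokens = new WeakSet();

function useToken(token) {
    if (usedTokens.has(token)) {
        throw new Error("Token has already been used");
    }
    usedTokens.add(token);
    // ...perform the privileged action; the WeakSet never pins tokens in memory
}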
Which of the above data structures will produce the same/similar result of doing: let hash = Object.create(null); hash[index] = something;
That would be nearest to Map, because the string index (the property name) is held by a strong reference in the object (it and its associated property will not be reclaimed even if nothing else references it).
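A quick sketch of the parallel:
// Both of these hold "index" strongly; Map additionally allows non-string keys.
const hash = Object.create(null); // no prototype, so no inherited keys
hash["index"] = "something";

const map = new Map();
map.set("index", "something");           // same idea
map.set({ any: "object" }, "also fine"); // not possible with a plain object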

How to force the RestKit mapping to always update the response NSManagedObject without a unique ID in the JSON body

I do not have any specific id available in the response JSON body (I cannot change the body). That is why I cannot use
RKEntityMapping *mapping = [RKEntityMapping mappingForEntityForName:...];
mapping.identificationAttributes = @[@"specificId"];
Is it possible to configure the mapping in such a way that no new NSManagedObject is created, but the previous one is always updated, if such an object exists?
I would like to fetch data for a UI update from a single response object (of a specific class). Yes, I can delete the previous instance of the response before the new one is received, but the approach asked about in this question is cleaner, and I do not need to keep a reference/id to the response entity.
I am reading the documentation for RKManagedObjectRequestOperation but it is not clear whether this approach is supported by RestKit.
Thank you for any comments.
I have made a hack that is not acceptable, but it works: I use a special attribute in each special "singleton" NSManagedObject subclass, e.g. unique, which I use for identification on the class level.
In RKManagedObjectMappingOperationDataSource, the condition is modified to allow passing entities with the special unique attribute:
// If we have found the entity identification attributes, try to find an existing instance to update
if ([entityIdentifierAttributes count] || [self.managedObjectCache isUniqueEntityClass:entity]) ...
In RKFetchRequestManagedObjectCache and RKInMemoryManagedObjectCache, a new method is defined:
- (BOOL)isUniqueEntityClass:(NSEntityDescription *)entity {
    __block BOOL isUniqueEntityClass = NO;
    [[[entity attributesByName] allKeys] enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        isUniqueEntityClass = [obj isEqualToString:@"unique"];
        if (isUniqueEntityClass) {
            *stop = YES;
            return;
        }
    }];
    return isUniqueEntityClass;
}
In the method
- (NSSet *)managedObjectsWithEntity:(NSEntityDescription *)entity
                    attributeValues:(NSDictionary *)attributeValues
             inManagedObjectContext:(NSManagedObjectContext *)managedObjectContext ...
isUniqueEntityClass decides whether to fetch the entity using a predicate built from attributeValues, or to fetch it directly without the predicate.
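For illustration, a simplified sketch of what that branch might look like (the real RestKit method is more involved):
// Inside managedObjectsWithEntity:attributeValues:inManagedObjectContext:
if ([self isUniqueEntityClass:entity]) {
    // "Singleton" entity: fetch whatever instance already exists,
    // ignoring attributeValues entirely
    NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
    fetchRequest.entity = entity;
    NSError *error = nil;
    NSArray *objects = [managedObjectContext executeFetchRequest:fetchRequest error:&error];
    return objects ? [NSSet setWithArray:objects] : [NSSet set];
}
// ...otherwise build the usual predicate from attributeValues and fetch with it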

Persisting Maps and Lists of properties as JSON in Grails

EDIT: onLoad() method changed to afterLoad(); otherwise objects might not be passed properly to the map.
I am currently using some domain classes with a lot of dynamic, complex properties that I need to persist and update regularly.
I keep these in a Map structure for each class, since this makes it easy for referencing in my controllers etc.
However, since Grails does not seem to be able to persist complex property types like List and Map in the DB, I am using the following approach to achieve this via JSON String objects:
class ClassWithComplexProperties {
    Map complexMapStructure // not persisted
    String complexMapStructureAsJSON // updated and synched with the map via afterLoad, beforeInsert, beforeUpdate
    static transients = ['complexMapStructure']

    def afterLoad() { // was previously (wrong!): def onLoad() {
        complexMapStructure = JSON.parse(complexMapStructureAsJSON)
    }
    def beforeInsert() {
        complexMapStructureAsJSON = complexMapStructure as JSON
    }
    def beforeUpdate() {
        complexMapStructureAsJSON = complexMapStructure as JSON
    }

    static constraints = {
        complexMapStructureAsJSON(maxSize: 20000)
    }
}
This works well as long as I am only loading data from the DB, but I run into trouble when I want to save my changes back to the DB, e.g. when I do the following:
/* 1. Load the JSON String, e.g. complexMapStructureAsJSON = """{
       data1:[[1,2],[3,4]], // A complex structure of nested integer lists
       data2:[[5,6]]        // Another one
   }""":
*/
ClassWithComplexProperties c = ClassWithComplexProperties.get(1)
// 2. Change a value deep in the map:
c.complexMapStructure.data1[0][0] = 7
// 3. Try to save:
c.save(flush: true)
This usually does not work since, I guess(?), GORM ignores the save() request because the map itself is transient and no changes are found in the persisted properties.
I can make it work as intended if I hack step 3 above and change it to:
// 3. Alternative save:
c.complexMapStructureAsJSON = "" // create a change in a persisted property (which will be overwritten anyway by the beforeUpdate closure)
c.save(flush: true)
To me this is not a very elegant handling of my problem.
The questions:
Is there a simpler approach to persist my complex, dynamic map data?
If I need to do it the way I currently do, is there a way to avoid the hack in step 3?
For option 2, you can use the beforeValidate event instead of beforeInsert and beforeUpdate events to ensure that the change propagates correctly.
class ClassWithComplexProperties {
    Map complexMapStructure // not persisted
    String complexMapStructureAsJSON // updated and synched with the map via onLoad and beforeValidate
    static transients = ['complexMapStructure']

    def onLoad() {
        complexMapStructure = JSON.parse(complexMapStructureAsJSON)
    }
    // >>>>>>>>>>>>>>
    def beforeValidate() {
        complexMapStructureAsJSON = complexMapStructure as JSON
    }
    // >>>>>>>>>>>>>>
    static constraints = {
        complexMapStructureAsJSON(maxSize: 20000)
    }
}
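With beforeValidate in place, the plain save path from the question should then work without the dummy-change hack; a quick sketch:
def c = ClassWithComplexProperties.get(1)
c.complexMapStructure.data1[0][0] = 7
// save() triggers validation, beforeValidate re-serializes the map,
// and GORM now sees a dirty persistent property
c.save(flush: true)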
I of course do not know much about the application you are building, but it won't hurt to look into alternative data storage models, particularly NoSQL databases. Grails has some support for them too.
Is there a simpler approach to persist my complex, dynamic map data?
Grails can persist List and Map out of the box; you don't need to write complex conversion code and abuse JSON.
Example for Map:
class ClassWithComplexProperties {
    Map<String, String> properties
}

def props = new ClassWithComplexProperties()
props.properties = ["foo" : "bar"]
props.save()
Example for List:
class ClassWithComplexProperties {
    List<String> properties
    static hasMany = [properties: String]
}

def props = new ClassWithComplexProperties()
props.properties = ["foo", "bar"]
props.save()
I think this is a much easier and cleaner way to deal with it.
In response to
Is there a simpler approach to persist my complex, dynamic map data?
Grails can persist Sets, Lists and Maps to the database. That may be a simpler approach than dealing with JSON conversions. To have the map persisted to the database you need to include it in the hasMany property.
Map complexMapStructure
static hasMany = [complexMapStructure: dynamicComplexPropertyObject]
The documentation suggests that using a Bag may be more efficient.

Hard to update an Entity created by another LINQ to SQL context

This has been bugging me all day.
I have an entity with several references, which I get from a context that I then dispose.
I make some changes and try to SubmitChanges(). Calling SubmitChanges() without .Attach() seems to simply do nothing, while using .Attach() gives me the exception:
An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.
Any ideas?
L2S is very picky about updating an entity that came from a different DB context. In fact, you cannot do it unless you 'detach' it first from the context it came from. There are a couple of different ways of detaching an entity. One of them is shown below. This code would be in your entity class.
public virtual void Detach()
{
    PropertyChanging = null;
    PropertyChanged = null;
}
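A hypothetical usage sketch (the Customer entity and context names are made up; Attach(entity, true) also assumes a version/timestamp column or UpdateCheck settings that allow attaching as modified):
Customer customer;
using (var readContext = new MyDataContext())
{
    customer = readContext.Customers.Single(c => c.CustomerId == id);
} // readContext is disposed here

customer.Detach();          // drop the old context's change-tracking hooks
customer.Name = "New name"; // make changes while detached

using (var writeContext = new MyDataContext())
{
    writeContext.Customers.Attach(customer, true); // attach as modified
    writeContext.SubmitChanges();
}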
In addition to this, you can also serialize your entity using WCF-based serialization. Something like this:
object ICloneable.Clone()
{
    // Round-trip the entity through the serializer to get a detached copy
    var serializer = new DataContractSerializer(GetType());
    using (var ms = new System.IO.MemoryStream())
    {
        serializer.WriteObject(ms, this);
        ms.Position = 0;
        return serializer.ReadObject(ms);
    }
}

Linq to SQL and concurrency with Rob Conery repository pattern

I have implemented a DAL using Rob Conery's spin on the repository pattern (from the MVC Storefront project), where I map database objects to domain objects using LINQ and use LINQ to SQL to actually get the data.
This is all working wonderfully, giving me the full control over the shape of my domain objects that I want, but I have hit a problem with concurrency that I thought I'd ask about here. I have concurrency working, but the solution feels like it might be wrong (just one of those glitchy feelings).
The basic pattern is:
private MyDataContext _dataContext;
private Table<Task> _tasks;

public Repository(MyDataContext dataContext)
{
    _dataContext = dataContext;
}

public IQueryable<Domain.Task> GetTasks()
{
    _tasks = _dataContext.Tasks;
    return from t in _tasks
           select new Domain.Task
           {
               Name = t.Name,
               Id = t.TaskId,
               Description = t.Description
           };
}

public void SaveTask(Domain.Task task)
{
    Task dbTask = null;
    // Logic for new tasks omitted...
    dbTask = (from t in _tasks
              where t.TaskId == task.Id
              select t).SingleOrDefault();
    dbTask.Description = task.Description;
    dbTask.Name = task.Name;
    _dataContext.SubmitChanges();
}
So with that implementation I've lost concurrency tracking because of the mapping to the domain task. I get it back by storing the private Table<Task>, which is my datacontext's list of tasks at the time of getting the original task.
I then update the tasks from this stored Table and save what I've updated.
This is working - I get change conflict exceptions raised when there are concurrency violations, just as I want.
However, it just screams to me that I've missed a trick.
Is there a better way of doing this?
I've looked at the .Attach method on the datacontext but that appears to require storing the original version in a similar way to what I'm already doing.
I also know that I could avoid all this by doing away with the domain objects and letting the LINQ to SQL generated objects go all the way up my stack - but I dislike that just as much as I dislike the way I'm handling concurrency.
I worked through this and found the following solution. It works in all the test cases I (and more importantly, my testers!) can think of.
I am using the .Attach() method on the datacontext and a TimeStamp column. This works fine the first time you save a particular primary key back to the database, but on subsequent saves I found that the datacontext throws a System.Data.Linq.DuplicateKeyException: "Cannot add an entity with a key that is already in use."
The workaround I created was to add a dictionary that stores the item I attach the first time around; every subsequent time I save, I reuse that item.
Example code is below; I do wonder if I've missed any tricks - concurrency is pretty fundamental, so the hoops I'm jumping through seem a little excessive.
Hopefully the below proves useful, or someone can point me towards a better implementation!
private Dictionary<int, Payment> _attachedPayments = new Dictionary<int, Payment>();

public void SavePayments(IList<Domain.Payment> payments)
{
    Dictionary<Payment, Domain.Payment> savedPayments =
        new Dictionary<Payment, Domain.Payment>();

    // Items with a zero id are new; handle the existing items first
    foreach (Domain.Payment p in payments.Where(p => p.PaymentId != 0))
    {
        // The list of attached payments that works around the linq datacontext
        // duplicate-key exception
        if (_attachedPayments.ContainsKey(p.PaymentId)) // Already attached
        {
            Payment dbPayment = _attachedPayments[p.PaymentId];
            // Just a method that maps domain to datacontext types
            MapDomainPaymentToDBPayment(p, dbPayment, false);
            savedPayments.Add(dbPayment, p);
        }
        else // Attach this payment to the datacontext
        {
            Payment dbPayment = new Payment();
            MapDomainPaymentToDBPayment(p, dbPayment, true);
            _dataContext.Payments.Attach(dbPayment, true);
            savedPayments.Add(dbPayment, p);
        }
    }

    // There is some code snipped but this is just brand new payments
    foreach (var payment in newPayments)
    {
        Domain.Payment payment1 = payment;
        Payment newPayment = new Payment();
        MapDomainPaymentToDBPayment(payment1, newPayment, false);
        _dataContext.Payments.InsertOnSubmit(newPayment);
        savedPayments.Add(newPayment, payment);
    }

    try
    {
        _dataContext.SubmitChanges();

        // Grab the Timestamp into the domain object
        foreach (Payment p in savedPayments.Keys)
        {
            savedPayments[p].PaymentId = p.PaymentId;
            savedPayments[p].Timestamp = p.Timestamp;
            _attachedPayments[savedPayments[p].PaymentId] = p;
        }
    }
    catch (ChangeConflictException)
    {
        foreach (ObjectChangeConflict occ in _dataContext.ChangeConflicts)
        {
            Payment entityInConflict = (Payment)occ.Object;
            // Use the datacontext refresh so that I can display the new values
            _dataContext.Refresh(RefreshMode.OverwriteCurrentValues, entityInConflict);
            _attachedPayments[entityInConflict.PaymentId] = entityInConflict;
        }
        throw;
    }
}
I would look at trying to utilise the .Attach method by passing the 'original' and 'updated' objects, thus achieving true optimistic concurrency checking from LINQ to SQL. IMO this would be preferable to using version or datetime stamps in either the DBML objects or your domain objects. I'm not sure how MVC allows for this idea of persisting the 'original' data, however; I've been trying to investigate the validation scaffolding in the hope that it stores the 'original' data, but I suspect it is only as good as the most recent post (and/or failed validation), so that idea may not work.
Another crazy idea I had was this: override GetHashCode() for all of your domain objects so that the hash represents the unique set of data for that object (minus the ID, of course). Then, either manually or with a helper, bury that hash in a hidden field in the HTML POST form and send it back to your service layer with your updated domain object, and do the concurrency checking in your service layer or data layer by comparing the original hash with a newly extracted domain object's hash. Be aware that you need to check for and raise concurrency exceptions yourself. It's nice to use the DBML functions, but the idea of abstracting away the data layer is to avoid depending on a particular implementation's features, so having full control of the optimistic concurrency checking on your domain objects in your service layer (for example) seems like a good approach to me. A rough sketch of that idea follows.
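A rough sketch of that hash-comparison idea (all names here are hypothetical, and the hash covers only the data fields, not the ID):
public static class ConcurrencyCheck
{
    // Hash the object's data fields, excluding the ID
    public static int ContentHash(Domain.Task task)
    {
        unchecked
        {
            int hash = 17;
            hash = hash * 31 + (task.Name ?? string.Empty).GetHashCode();
            hash = hash * 31 + (task.Description ?? string.Empty).GetHashCode();
            return hash;
        }
    }

    // originalHash comes back from the hidden form field;
    // currentInDb is freshly loaded before applying the user's edits
    public static void EnsureUnchanged(int originalHash, Domain.Task currentInDb)
    {
        if (ContentHash(currentInDb) != originalHash)
            throw new InvalidOperationException(
                "Concurrency conflict: the task was changed by someone else.");
    }
}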