Can I insert deserialized JSON SObjects from another Salesforce org into my org? - json

We have the need to clone a complex data structure from one org to another. This contains a series of custom SObjects, including parents and children.
The flow would be the following: on the origin org, we just JSON.serialize the list of SObjects we want to send. Then, on the target org, we can JSON.deserialize that list of objects. So far so good.
The problem is that we cannot insert those SObjects directly, since they contain the origin org's IDs and Salesforce won't let us insert objects that already have Ids.
The solution we found is to manually insert the object hierarchy, maintaining a map of originId > targetId and fixing the relationships manually. However, we wonder whether Salesforce provides an easier way to do this, or whether someone knows a better approach.
Is there a built-in way in Salesforce to do this? Or are we stuck with a tedious manual process?

A List.deepClone() call with preserveIds = false might deal with one problem (stripping the copied Ids; see the sketch at the end of this answer). Then:
Consider using upsert operation to build the relationships for you.
Upsert can not only prevent duplicates but also maintain hierarchies.
You'll need an external Id field on the parent, not on the children though.
/* Prerequisites to run this example successfully:
- having a field Account_Number__c that will be marked as ext. id (you can't mark the standard one, sadly)
- having an account in the DB with such a value (but the point of the example is to NOT query for its Id)
*/
Account parent = new Account(Account_Number__c = 'A364325');
Contact c = new Contact(LastName = 'Test', Account = parent);
upsert c;
System.debug(c);
System.debug([SELECT AccountId, Account.Account_Number__c FROM Contact WHERE Id = :c.Id]);
If you're not sure whether it will work for you - play with Data Loader's upsert function, might help to understand.
If you have more than a two-level hierarchy on the same sObject type, I think you'd still have to upsert them in the correct order, though (or use the Database.upsert version and keep rerunning it for the failed ones).
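For the Id-stripping half mentioned at the top of this answer, a minimal sketch (the jsonPayload variable is a placeholder for the serialized list received from the origin org; Accounts are used for simplicity):

// Minimal sketch, assuming jsonPayload holds the JSON serialized on the origin org
// and that the list is of Accounts. deepClone(false) copies the records without
// their Ids, so the clones can be inserted (or upserted) into the target org.
List<Account> received = (List<Account>) JSON.deserialize(jsonPayload, List<Account>.class);
List<Account> clones = received.deepClone(false);
upsert clones Account_Number__c; // external Id field assumed, as in the example above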

Related

SSIS consolidate and concatenate multiple rows into single rows without using SQL

I am trying to accomplish something that is pretty easy to do in SQL, but seemingly very challenging to do in SSIS without using SQL. Basically, I need to consolidate and concatenate a field of a many-to-one relationship.
Given entities: [Contract Item] (many) to (one) [Account]
There is a field [ari_productsummary] that contains the product listed on the Contract Item entity. We want to write that value to the Account as [ari_activecontractitems]. However, an Account may have more than one Contract Item record associated with it, in which case we want to concatenate those values. We also only want the distinct values to be concatenated (distinct rows are already solved within my data flow).
This can be accomplished by writing to a temporary table and then using a query or view to obtain the summarized results, as follows. I created a SQL table called TESTTABLE that contains the [ari_productsummary] from the Contract Item entity along with the referring [accountid] to map it back to Account. I then wrote the following query as a view:
SELECT DISTINCT accountid,
    (SELECT TT2.ari_productsummary + '; '
     FROM TESTTABLE TT2
     WHERE TT2.accountid = TT.accountid
     FOR XML PATH ('')
    ) AS 'ari_activecontractitems'
FROM TESTTABLE TT
Executing that query provides the results I want, which I can then use for importing into the Account entity as shown below:
But how do I do this in an SSIS data flow without writing to a SQL table as a temporary placeholder for the data? I want to do the entire process inside one data flow container, without using a temporary SQL table/view. The whole summarization process needs to be done on the fly:
Does anyone have a solution that doesn't require a temporary SQL table/view/query, but is contained entirely within the data flow?
I am using VS 2017 and the KingswaySoft Dynamic CRM 365 ETL toolset to develop my solution/package.
Spitballing here, as I don't know Dynamics, nor do I have the custom components.
Data Flow 1 - Contract aggregation
The purpose of this data flow is to replicate your logic in the elegant query you provided and shove that into a Cache Connection Manager (see Notes for 2008+ at the end)
KingswaySoft Dynamics Source -> Script Task -> Cache Transform
If you want to keep the sort in there, do it before the script task. The implementation I'll take with the Script Task is that it's fully blocking - that is all the rows must arrive before it can send any on. Tasks like the Merge Join are only partially blocking because the requirement of sorted data means that once you no longer have a match for the current item, you can send it on down the pipeline.
The Script Task is going to be an asynchronous transformation. You'll have two output columns: your key, accountid, and your new derived column, ari_activecontractitems. That column might need to be big - you'll know your data best, but if it's a blob type in Dynamics (> 4k unicode or > 8k ascii characters) then you'll have to define the data type as DT_TEXT/DT_NTEXT.
As inputs, you'll select accountid and ari_productsummary from your source.
The code should be pretty easy. We're going to accumulate the inbound data into a Dictionary.
// member variable
Dictionary<string, List<string>> accumulator;
The PreProcess method, we'll tack this in there to initialize our variable
// initialize in PreProcess method
accumulator = new Dictionary<string, List<string>>();
In the row-processing method (OnBufferRowSent, name approximate):
// simulate the inbound queue
// row_id would be something like Rows.row_id
if (!accumulator.ContainsKey(row_id))
{
    // create an empty list for this key
    accumulator.Add(row_id, new List<string>());
}

// add the value if we don't already have it
// invoice would be something like Rows.ari_productsummary
if (!accumulator[row_id].Contains(invoice))
{
    accumulator[row_id].Add(invoice);
}
Once you get the signal that no more data is available, that's when you start sending output data. The auto-generated code will have placeholders for all of this.
// This is how we shove data out the pipe
foreach (var kvp in accumulator)
{
    // approximately thus
    OutputBuffer1.AddRow();
    OutputBuffer1.row_id = kvp.Key;
    OutputBuffer1.ari_productsummary = string.Join("; ", kvp.Value);
}
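Putting the fragments together, a minimal sketch of the blocking, asynchronous Script Component might look roughly like this. The method names are the ones the SSIS designer generates; the buffer and column property names (Row.accountid, Output0Buffer.ariactivecontractitems, etc.) are assumptions and may differ slightly in your generated wrappers. The SSIS-generated usings and attributes are omitted.

// Minimal sketch of a fully blocking, asynchronous Script Component.
// Assumes input columns accountid and ari_productsummary, and output columns
// accountid and ari_activecontractitems on an asynchronous output.
using System.Collections.Generic;

public class ScriptMain : UserComponent
{
    // key = accountid, value = distinct product summaries seen so far
    private Dictionary<string, List<string>> accumulator;

    public override void PreExecute()
    {
        base.PreExecute();
        accumulator = new Dictionary<string, List<string>>();
    }

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (!accumulator.ContainsKey(Row.accountid))
        {
            accumulator.Add(Row.accountid, new List<string>());
        }
        if (!accumulator[Row.accountid].Contains(Row.ariproductsummary))
        {
            accumulator[Row.accountid].Add(Row.ariproductsummary);
        }
    }

    public override void Input0_ProcessInput(Input0Buffer Buffer)
    {
        base.Input0_ProcessInput(Buffer); // calls Input0_ProcessInputRow for each row
        if (Buffer.EndOfRowset())
        {
            // All rows have arrived; emit one output row per account.
            foreach (var kvp in accumulator)
            {
                Output0Buffer.AddRow();
                Output0Buffer.accountid = kvp.Key;
                Output0Buffer.ariactivecontractitems = string.Join("; ", kvp.Value);
            }
            Output0Buffer.SetEndOfRowset();
        }
    }
}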
We have an upcoming release that comes with a component that does exactly what you are trying to achieve without the need to write custom code. The feature is currently in preview; please reach out to us for private access to it. You can find our contact information on our website.
UPDATE - June 5, 2020: we have made the components available for public access at https://www.kingswaysoft.com/products/ssis-productivity-pack/ as part of our 2020 Release Wave 1. We have two components that serve this kind of purpose. The Composition component takes input values and transforms them into a composite value in an SSIS column. The Decomposition component does the opposite: it takes an input value and splits it into multiple rows using either delimiter-based text splitting or XML/JSON array splitting.

SQLAlchemy override reflected columns dynamically

I'm using SA in a script I'll be using to periodically 'copy' a subset of MySQL tables from a 'production' replica to dev/test systems. I had written code to simply reflect the source tables and call meta.create_all(destination_engine). Due to the nature of FKs, I now know I need to apply use_alter=True to the ForeignKeys on the tables as I create them so that I won't get CircularDependencyErrors or other problems. I need to assume I don't know how many FKs there are, or their names, until I go through the metadata.
I'm new to SA and typically a Java programmer (as you will tell :D). I tried to change the use_alter attr. iteratively at first:
tablesd = smeta.tables.items()
for tname, t in tablesd:
    for c in t.columns:
        for fk in c.foreign_keys:
            fk.use_alter = True
smeta.create_all(to_engine)
EDIT: It's important to note that create_all() does NOT throw a CircularDependencyError after I set the use_alter property like I do above. If I remove that code, create_all() does not work. It just doesn't seem to be removing the FKs from the create...
This obviously didn't work. I then read Overriding Reflected Columns in the SA docs, sample being:
mytable = Table('mytable', meta,
    Column('id', Integer, primary_key=True),   # override reflected 'id' to have primary key
    Column('mydata', Unicode(50)),             # override reflected 'mydata' to be Unicode
    autoload=True)
I'd guess reflecting each table individually then adding use_alter=True in the FK definition would work, but I CANNOT assume the names and values or # of FK's/columns. I read a lot about using DeclarativeBase to do something like this, but I'm not really sure how that would work...
How can I take my arbitrary list of tables, reflect them, then Override the use_alter option on their respective foreign keys? Am I thinking about this the wrong way?
The answer ended up being inside the problem (imagine that...). Although each ForeignKey object has a use_alter value that can be set, Constraints also have a separate use_alter property that can be set (I was not able to find this in the API documentation). After running it through PyDev's debugger, I noticed the former were being set, but all the keys that had Constraints associated with them were still False. I set them to True thusly:
for fk in table.foreign_keys:
    fk.use_alter = True
    fk.constraint.use_alter = True
This seemed to produce the SQL I was looking for and tables were created correctly with no CircularDependencyErrors and metadata.sorted_tables seemed to work fine with no errors. I was actually able to refactor my code and do things the RIGHT way!
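Put together, the whole copy step might look roughly like this (a sketch; the engine URLs and table selection are placeholders, not from the original code):

# Minimal sketch, assuming from_engine/to_engine point at the source replica and
# the target system (URLs are placeholders). Reflect the source tables, flip
# use_alter on every FK and its constraint, then create them on the target.
from sqlalchemy import MetaData, create_engine

from_engine = create_engine('mysql://user:pass@prod-replica/mydb')
to_engine = create_engine('mysql://user:pass@dev/mydb')

smeta = MetaData()
smeta.reflect(bind=from_engine)   # or smeta.reflect(bind=from_engine, only=['table1', 'table2'])

for table in smeta.sorted_tables:
    for fk in table.foreign_keys:
        fk.use_alter = True
        fk.constraint.use_alter = True

smeta.create_all(to_engine)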
For anyone looking to do DB-->DB reflecting with complex FKs using SQLAlchemy, this answer and Tyler Lesmann's article are for you.
UPDATE: Using this method has passed a peer review and is now being used as production code. It seems to work well!

Confusion with Entity Framework context

I'm a bit confused in regards to how EF's dbContext works.
If I do something like _context.Persons.Add(_person) (assuming person is a valid entity), if I then (before calling _context.SaveChanges()) query Persons, will the person I just added be included in the results?
For example:
Person _person = new Person() {Firstname = "Bill", Lastname = "Snerdly"};
_context.Persons.Add(_person);
var _personList = _context.Persons.Where(p => p.Lastname.StartsWith("Sne"));
Whenever I try this, it seems as though the context loses track of the fact that I've added this new person to the context.
What confuses me is that if I edit an existing person and attach the person and set the state to modified, querying the context seems to keep track of the changes that were made and returns them in the results. For example:
//Assuming that Person 5 exists with the name William Snerdly
Person _person = new Person() {Id = 5, Firstname = "Bill", Lastname = "Snerdly"};
_context.Persons.Attach(_person);
_context.Entry(_person).State = System.Data.EntityState.Modified;
var _personList = _context.Persons.Where(p => p.Lastname.StartsWith("Sne"));
In this case, it seems like the person with the id of 5 will show up in the list with the name Bill instead of William. IOW, the context queried the data but retained the changes, while in the first scenario the context queried the data but ignored any added items. It just seems a bit inconsistent.
Am I understanding this correctly or am I missing something?
Thanks for your help with this.
No, as it does not yet exist in the database. It will, however, be accessible through the ObjectStateManager of the ObjectContext, or alternatively, if you're using the DbContext/DbSet wrappers, through the .Local property of the DbSet.
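A minimal sketch of the difference (assuming a DbContext with a DbSet<Person> called Persons):

// The new person is not in the database yet, so it won't come back from a
// query, but it is visible in the change tracker via DbSet.Local.
Person person = new Person { Firstname = "Bill", Lastname = "Snerdly" };
_context.Persons.Add(person);

// Hits the database: the unsaved person is NOT in this result.
var fromDatabase = _context.Persons.Where(p => p.Lastname.StartsWith("Sne")).ToList();

// In-memory only: includes the unsaved person.
var fromLocal = _context.Persons.Local.Where(p => p.Lastname.StartsWith("Sne")).ToList();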
In the case of the edit, you're seeing the ORM's first level cache at work. The query is executed against the database (and so compares against the values in there - your example would get even weirder if you modified the Lastname in the context, but still get the result from the query looking for the unmodified Lastname), but when its results are processed, first the ID of the returned entity is checked, and since the entity with that ID is already present in the context, you get that instance back. This is the default "AppendOnly" mode of operation.
I don't know what you want to do, but I had to understand all that when I wanted to validate my changes according to rules that needed to use the values of both loaded and unread entities. I ended up starting a transaction, saving the changes with the "None" option, doing my validation queries against the database (which then contained the "merged" view of the data), and then rolling back the transaction if the data was invalid, or accepting the changes and committing the transaction otherwise.

"Diffing" objects from a relational database

Our win32 application assembles objects from the data in a number of tables in a MySQL relational database. Of such an object, multiple revisions are stored in the database.
When storing multiple revisions of something, sooner or later you'll ask yourself the question if you can visualize the differences between two revisions :) So my question is: what would be a good way to "diff" two such database objects?
Would you do the comparison at the database level? (Doesn't sound like a good idea: too low-level, and too sensitive to the schema).
Would you compare the objects?
Would you write a function that "manually" compares the properties and fields of two objects?
How would you store the diff? In a separate, generic "TDiff" object?
Any general recommendations on how to visualize such things in a user interface?
Advice, or stories about your own experiences with this, are very welcome; thanks a bunch!
Extra info on use case (20090515)
In reply to Antony's comment: this specific application is used to schedule training courses, run by teams of teachers. The schedule of a teacher is stored in various tables in the database, and contains info such as "where does she have to go on which day", "who are her colleagues in the team", etc. This information is spread out over multiple tables.
Once in a while, we "publish" the schedule, so the teachers can see it on a webpage. Each "publication" is a revision, and we'd like to be able to show the users (and later also the teachers) what's changed between two publications --- if anything.
Hope that makes the scenario a bit more tangible :)
Some final remarks
Well, the bounty has come to an end, so I've accepted an answer. If it'd somehow be possible to slice a couple of extra 100's off of my rep and give it to some of the other answers, I would do so without hesitation. All your guys' help has been great, and I am very grateful! ~ Onno 20090519
Just an idea, but would it be worthwhile for you to convert the two object versions being compared to some text format and then compare these text objects using an existing diff program - like diff itself, for example? There are lots of nice diff programs out there that can offer nice visual representations, etc.
So for example
Text version of Object 1:
first_name: Harry
last_name: Lime
address: Wien
version: 0.1
Text version of Object 2:
first_name: Harry
last_name: Lime
address: Vienna
version: 0.2
The diff would be something like:
3,4c3,4
< address: Wien
< version: 0.1
---
> address: Vienna
> version: 0.2
Assume that a class has 5 known properties - date, time, subject, outline, location. When I look at my schedule, I'm most interested in the most recent (ie current/accurate) version of these properties. It would also be useful for me to know what, if anything, has changed. (As a side note, if the date, time or location changed, I'd also expect to get an email/sms advising me in case I don't check for an updated schedule :-))
I would suggest that the 'diff' is performed at the time the schedule is amended. So, when version 2 of the class is created, record which values have changed and store this in two 'changelog' fields on the version 2 object (there must already be one parent table that sits atop all your tables - use that one!). One changelog field is human-readable text, e.g. 'Date changed from Mon 1 May to Tues 2 May, Time changed from 10:00am to 10:30am'. The second changelog field is a delimited list of changed fields, e.g. 'date,time'. To do this, before saving you would loop over the values submitted by the user, compare them to the current database values, and concatenate two strings - one human-readable, one a list of field names (a rough sketch of this loop appears below). Then update the data and set your concatenated strings as the 'changelog' values.
When displaying the schedule load the current version by default. Loop through the fields in the changelog field list, and annotate the display to show that the value has changed (a * or a highlight, etc). Then, in a separate panel display the human readable change log.
If a schedule is amended more than once, you would probably want to combine the changelogs between version 1 & 2, and 2 & 3. Say in version 3 only the course outline changed - if that was the only changelog you had when displaying the schedule, the change to date and time wouldn't be displayed.
Note that this denormalised approach won't be great for analysis - eg working out which specific location always has classes changed out of it - but you could extend it using an E-A-V model to store the change log.
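A rough sketch of that save-time loop (the Schedule type, its property names, and the ChangelogText/ChangelogFields fields are illustrative, not from the original answer):

// Minimal sketch, run just before saving an amended class. 'current' holds the
// values already in the database, 'submitted' the values from the user; both are
// of an illustrative Schedule type with Date, Time, Subject, Outline, Location.
var humanReadable = new List<string>();
var changedFields = new List<string>();

foreach (string field in new[] { "Date", "Time", "Subject", "Outline", "Location" })
{
    object oldValue = typeof(Schedule).GetProperty(field).GetValue(current, null);
    object newValue = typeof(Schedule).GetProperty(field).GetValue(submitted, null);
    if (!Equals(oldValue, newValue))
    {
        humanReadable.Add(field + " changed from " + oldValue + " to " + newValue);
        changedFields.Add(field.ToLower());
    }
}

// The two changelog values stored on the version 2 record (names illustrative).
submitted.ChangelogText = string.Join(", ", humanReadable);
submitted.ChangelogFields = string.Join(",", changedFields);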
Doing a comparison at the database level would be good if what you cared about was changes to the database. That makes the most sense if you're trying to design a layer of generic functionality on top of the database itself.
Doing a comparison at the object level would be good if you care about changes to the data. For example, if the data was the input to a program and you were interested in looking at changes in the input to verify that changes to the output were correct.
Your use case doesn't appear to be either of these. You appear to care about the output and want differences from that perspective. If that's the case, I would do differences on the output report (or a pure-text version of it) instead of on the underlying data. You can do that with any off-the-shelf diff tool. To make things easier for your end-users you could parse the diff results and render them as HTML. There are lots of options here: side-by-side with color coding to indicate changes, one document with markup for changes (e.g. red strikethrough for deletions and green for additions), maybe just highlight areas that have changed and use balloons to show the previous/current values on demand.
I've thought about doing database comparisons but never tried to implement it. As you noted, any such attempts are intimately intertwined with the schema.
I have done object-level comparisons. The general algorithm was this:
Do a set comparison on the lists of object IDs. This creates three result groupings: added objects, deleted objects, and objects that live in both sets.
Report the deletions.
Report the additions.
For the things in both sets, do an attribute-by-attribute comparison.
If any differences are found, report the object ID, the attributes that differ, and the respective values. If appropriate, highlight the portion of the attribute value that has changed.
In my case, the comparison algorithms were hand-written to match the object attributes. This gave me control over which attributes were compared and how. A generic comparator might be possible for some cases but would depend on the situation and at least partially on the implementation language.
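A rough sketch of that set-based comparison (the MyObject type and its PropertyValues() map are illustrative stand-ins for a hand-written comparator):

// Minimal sketch, assuming each revision is available as a
// Dictionary<int, MyObject> keyed by object ID, and that MyObject exposes a
// PropertyValues() dictionary of attribute name -> value for generic comparison.
// (Requires System, System.Collections.Generic, System.Linq.)
var oldIds = new HashSet<int>(oldRevision.Keys);
var newIds = new HashSet<int>(newRevision.Keys);

var deleted = oldIds.Except(newIds);     // report the deletions
var added = newIds.Except(oldIds);       // report the additions
var common = oldIds.Intersect(newIds);   // compare these attribute by attribute

foreach (int id in common)
{
    foreach (var attribute in oldRevision[id].PropertyValues())
    {
        object newValue = newRevision[id].PropertyValues()[attribute.Key];
        if (!Equals(attribute.Value, newValue))
        {
            Console.WriteLine("{0}: {1} changed from '{2}' to '{3}'",
                              id, attribute.Key, attribute.Value, newValue);
        }
    }
}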
I've looked into MySQL diffing a number of times. Unfortunately, there aren't any really good solutions available.
One tool I've tried was mysqldiff (www.mysqldiff.org). mysqldiff is a tool written in PHP which is capable of diffing MySQL schemas. Unfortunately, it doesn't do a great job a lot of the time.
MySQL Workbench, MySQL's own SQL IDE, provides the option to generate an alter script, and I would imagine it does this by performing some kind of diff operation internally.
Aqua Data Studio is another tool that is capable of comparing schemas and outputting a diff of the two. While the ADS diff is quite nice, it does not provide a tool to create an alter script.
If I were writing my own, I guess I would write code capable of comparing the structure of two tables. Such code could be tuned to be highly sensitive (e.g. if column order differs from one version to the next, it's a difference) or more moderately sensitive (e.g. column order is not a major issue; datatypes and lengths are important, as are indices and constraints).
Storage, I'm not too sure about. I would look into how a version control system such as Mercurial stores its diff information for revisions and use that to elaborate a method appropriate for the DB.
Finally, for visual output I recommend you take a look at the Aqua Data Studio compare feature (you can use the trial version to test this...). Its diff output is pretty good.
My application dbscript compares hierarchical data (database schemas) in a stored procedure, which of course has to compare each field/property of every object with its counterpart. I guess you won't get around that step (unless you have a generic object description model)
As for the UI part of your question, have a look at screenshots to view and select differences.
I would think about some sort of common text representation of the objects and compare the texts with an existing diffing tool like WinMerge.
I see no need to invent diffing by myself since there are already plenty of nice tools I can use.
In your situation, in PostgreSQL, I used difference (history) tables with the schema:
create table history_columns (
    column_id smallint primary key,
    column_name text not null,
    table_name text not null,
    unique (table_name, column_name)
);

create temporary sequence column_id_seq;

insert into history_columns
select nextval('column_id_seq'), column_name, table_name
from information_schema.columns
where
    table_name in ('table1', 'table2', 'table3')
    and table_schema = current_schema() and table_catalog = current_database();

create table history (
    column_id smallint not null references history_columns,
    id int not null,
    change_time timestamp with time zone not null
        constraint change_time_full_second -- only one change allowed per second
        check (date_trunc('second', change_time) = change_time),
    primary key (column_id, id, change_time),
    value text
);
And on the tables I used a trigger like this:
create or replace function save_history() returns trigger as
$$
begin
    if (tg_op = 'DELETE') then
        insert into history values (
            find_column_id('id', tg_relname), OLD.id,
            date_trunc('second', current_timestamp),
            OLD.id );
        [for each column_name] {
            if (char_length(OLD.column_name) > 0) then
                insert into history values (
                    find_column_id(column_name, tg_relname), OLD.id,
                    OLD.change_time, OLD.column_name
                );
            end if;
        }
        return OLD;
    elsif (tg_op = 'UPDATE') then
        [for each column_name] {
            if (OLD.column_name is distinct from NEW.column_name) then
                insert into history values (
                    find_column_id(column_name, tg_relname), OLD.id,
                    OLD.change_time, OLD.column_name
                );
            end if;
        }
    end if;
    return NEW;
end;
$$ language plpgsql volatile;
create trigger save_history_table1
before update or delete on table1
for each row execute procedure save_history();
This isn't really an answer to the question you asked, rather an attempt to re-imagine the problem. Would you consider altering your database and object model to store the aggregate root and a series of deltas? That is, model and store RevisionSets that are collections of Revisions; a Revision is an entity property paired with a value. In a sense, this internalizes into your architecture the revision structure that the other posters are suggesting you bolt on to what you already have via "logs".
It's trivial to display the aggregate from the deltas, and even easier to display the deltas as a change history. The fact that you are using a rich client with state and local memory makes this even more compelling. You could very easily display "all the changes since date xxxx" without revisiting the database.
Credit for the basic idea goes to Greg Young and his work with financial data streams, but it is eminently applicable to your problem.
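A rough sketch of what that delta model could look like (all names are illustrative):

// Minimal sketch of the aggregate-root-plus-deltas idea.
public class Revision
{
    public string PropertyName { get; set; }   // which entity property changed
    public string NewValue { get; set; }       // the value it changed to
}

public class RevisionSet
{
    public DateTime PublishedAt { get; set; }              // one per publication
    public List<Revision> Revisions = new List<Revision>();
}

// The current schedule is the aggregate root with every RevisionSet applied in
// order; the change history between two publications is just the RevisionSets
// that lie between them, which can be rendered without revisiting the database.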
I'm riffing off of what Harry Lime suggested: output your properties to a text format, then hash the results. That way you can compare the hash values and easily flag the data that has been altered. You get the best of both worlds, as you can visually see differences but programmatically identify them. With the hash you'll have a good source for an index, should you want to store and retrieve the deltas.
Given you want to create a UI for this and need to indicate where the differences are, it seems to me you can either go custom or create a generic object comparer - the latter being dependent on the language you are using.
For the custom method, you need to create a class that takes two instances of the classes to be compared. It then returns the differences:
public class Person
{
    public string Name;
}

public class PersonComparer
{
    private readonly Person oldPerson;
    private readonly Person newPerson;

    public PersonComparer(Person oldPerson, Person newPerson)
    {
        this.oldPerson = oldPerson;
        this.newPerson = newPerson;
    }

    public bool NameIsDifferent() { return oldPerson.Name != newPerson.Name; }

    public string NameDifferentText()
    {
        return NameIsDifferent()
            ? "Name changed from " + oldPerson.Name + " to " + newPerson.Name
            : "";
    }
}
This way you can use the PersonComparer object to build your GUI.
The generic approach would be much the same, except that you generalize the calls and use object inspection (the GetObjectProperty call below) to find differences:
public class ObjectComparer
{
    private readonly object oldObj;
    private readonly object newObj;

    public ObjectComparer(object oldObj, object newObj)
    {
        this.oldObj = oldObj;
        this.newObj = newObj;
    }

    // Read a property value by name via reflection.
    private static object GetObjectProperty(object obj, string name) { return obj.GetType().GetProperty(name).GetValue(obj, null); }

    public bool PropertyIsDifferent(string propertyName) { return !Equals(GetObjectProperty(oldObj, propertyName), GetObjectProperty(newObj, propertyName)); }

    public string PropertyDifferentText(string propertyName) { return PropertyIsDifferent(propertyName) ? propertyName + " changed from " + GetObjectProperty(oldObj, propertyName) + " to " + GetObjectProperty(newObj, propertyName) : ""; }
}
I would go for the second, as it makes things really easy to change the GUI as needed. In the GUI I would try 'yellowing' the differences to make them easy to see - but that depends on how you want to show them.
Getting the objects to compare would be a matter of loading your object with the initial revision and with the latest revision.
My 2 cents... Not as techy as the database compare stuff already here.
Have you looked at Open Source DiffKit?
www.diffkit.org
I think it does what you want.
Example with Oracle.
Export ordered objects to text with dbms_metadata
Export ordered tables data into CSV or query format
Make big text file
Diff

DLINQ- Entities being inserted without .InsertOnSubmit(...)?

I ran into an interesting problem while using DLINQ. When I instantiate an entity, calling .SubmitChanges() on the DataContext will insert a new row into the database - without having ever called .Insert[All]OnSubmit(...).
//Code sample:
Data.NetServices _netServices = new Data.NetServices(_connString);
Data.ProductOption[] test = new Data.ProductOption[]
{
new Data.ProductOption
{
Name="TEST1",
//Notice the assignment here
ProductOptionCategory=_netServices.ProductOptionCategory.First(poc => poc.Name == "laminate")
}
};
_netServices.SubmitChanges();
Running the code above will insert a new row in the database. I first noticed this effect while writing an app to parse an XML file and populate some tables: there were 1000+ inserts when I was only expecting around 50 or so - then I finally isolated this behavior.
How can I prevent these objects from being persisted implicitly?
Thanks,
-Charles
Think of the relationship as having two sides. When you set one side of the relationship, the other side needs to be updated, so in the case above, as well as setting the ProductOptionCategory, it is effectively adding the new object to the ProductOptions collection on the laminate ProductOptionCategory side.
The work-around, as you have already discovered, is to set the underlying foreign key instead, so LINQ to SQL will not track the object in the usual way and will require an explicit indication (InsertOnSubmit) that it should persist the object.
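A sketch of that work-around (the ProductOptionCategoryId column, the Id property, and the ProductOptions table name are assumptions about the generated model, not from the original code):

// Minimal sketch, assuming the generated ProductOption class exposes the
// underlying FK column as ProductOptionCategoryId.
int laminateId = _netServices.ProductOptionCategory
                             .First(poc => poc.Name == "laminate").Id;

Data.ProductOption option = new Data.ProductOption
{
    Name = "TEST1",
    // Setting only the FK value does not add the new object to the laminate
    // category's ProductOptions collection, so SubmitChanges() will not pick
    // it up implicitly...
    ProductOptionCategoryId = laminateId
};

// ...it is only persisted once you ask for it explicitly:
_netServices.ProductOptions.InsertOnSubmit(option);
_netServices.SubmitChanges();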
Of course the best solution for performance would be to determine from the source data which objects you don't want to add and never create the instance in the first place.