I make a query (with \yii\db\ActiveQuery) with joins, and some fields in the "where" clause become ambiguous. Is there a nice, short way to prefix the column name with the table name of the current model (the ActiveRecord the ActiveQuery was instantiated from), so I can use it everywhere and keep it short?
I don't like doing something like this all the time (especially in places where there are no joins yet, but where I want to be able to reuse the same methods with joins later):
// in the ActiveQuery method initialized from the model with tableName "company"
$this->andWhere(['{{%company}}.`company_id`' => $id]);
The goal is to make "named scopes" work even in cases that involve joins.
Also, what does [[..]] mean in this context, for example:
$this->andWhere(['[[company_id]]' => $id]);
It doesn't seem to solve the problem described above.
Thanks in advance!
P.S. sorry, don't have enough reputation to create tag yii2-active-query
To get the real table name:
Class:
ModelName::getTableSchema()->fullName
Object:
$model::getTableSchema()->fullName
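For example, you could use that name (or the model's static tableName()) to qualify the column inside a query method. A minimal sketch, assuming a Company model with a company_id column (the CompanyQuery class and byCompanyId() method are made-up names):

namespace app\models;

use yii\db\ActiveQuery;

class CompanyQuery extends ActiveQuery
{
    public function byCompanyId($id)
    {
        // Company::tableName() returns something like '{{%company}}', so the
        // column stays qualified and unambiguous even after joins are added.
        return $this->andWhere([Company::tableName() . '.company_id' => $id]);
    }
}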
Your problem is a very common one and happens most often with fields like description, notes and the like.
Solution
Instead of
$this->andWhere(['description'=>$desc]);
you simply write
$this->andWhere(['mytable.description'=>$desc]);
Done! Simply add the table name in front of the field. Both the table name and the field name will be automatically quoted when the raw SQL is created.
Pitfall
The above example solves your problem within query classes. One pitfall I struggled with, and which took me quite some time to solve, was the model's relations! If you join in other tables during your queries (more than just one) you can also run into this problem, because the relation methods within your models are not qualified.
Example: say you have three tables: student, class, and teacher. Student and teacher are probably both related to class through FK fields. Now if you go from student via class to teacher ($student->class->teacher), you also get the ambiguous-column error. The problem here is that you should also qualify the relation definitions within your models!
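// Defined in the Class model - note that both sides of the link are qualified with their table names: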
public function getTeacher()
{
return $this->hasOne(Teacher::className(), ['teacher.id' => 'class.teacher_id']);
}
Proposal
When developing your models and query-classes always fully qualify the fields. You will never ever run into this problem again...that was my experience at least! I actually created my own model-gii-template. So this gets solved automatically now ;)
Hope it helped!
I am having a problem with a BusinessObjects universe and the way it generates queries and, consequently, yields results.
Here is the background: a mechanism that works has already been implemented. I was trying to copy the SAME mechanism, just to deliver a different field.
Here is the data model: http://tinypic.com/r/ng524g/8
The mechanism that functions is marked with BLUE color. The mechanism that I tried to implement and that is not functioning is marked with RED color.
On the business layer I have defined a dimension with an aggregate aware function. This function first takes the VWF_Party_Collection_A.Collectionstatus_CD column (at the higher level). If a user selects an attribute from the contract level, the function takes the VWF_Contract_Collection_A.Collectionstatus_CD column instead.
The problem is that when I take all attributes from the VWD_Kunde_A table and then add the dimension with the mentioned aggregate aware function (i.e. Collectionstatus_CD), the query constructed by BO does not make any sense. Here it is:
SELECT
D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A.Collectionstatus_CD,
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Namespace_TXT,
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY,
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID
FROM
D_ATA_MV_FinanceTreasury.VWD_Party_A
LEFT JOIN D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A
ON D_ATA_MV_FinanceTreasury.VWD_Party_A.Party_KEY=D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A.Party_KEY,
D_ATA_MV_FinanceTreasury.VWD_Kunde_A
WHERE
(
D_ATA_MV_FinanceTreasury.VWD_Party_A.Party_KEY=D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY )
AND
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID = 102241978
Please notice the strange construction in the 'FROM' part (a comma has been added). Another strange and unexpected construction is in the 'WHERE' part:
( D_ATA_MV_FinanceTreasury.VWD_Party_A.Party_KEY=D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY )
The mechanism that works joins VWD_Kunde_A with the VWF_Contract_Collection_A table and yields the correct result.
Now, I have tried to define a dimension without the mentioned aggregate aware function, containing only the VWF_Contract_Collection_A.Collectionstatus_CD attribute. When I run the same query, BO yields CORRECT results and generates the CORRECT (expected) query.
This is the query I am expecting:
SELECT
D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Collectionstatus_CD,
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Namespace_TXT,
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY,
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID
FROM
D_ATA_MV_FinanceTreasury.VWD_Kunde_A LEFT JOIN D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A ON D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Namespace_TXT = D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Namespace_TXT AND D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY = D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Party_KEY AND D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID = D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Legacy_ID
WHERE
D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID = 102241978
Furthermore, I suspected that it could have something to do with contexts. However, I did not find any context for the mechanism that already works and that I tried to copy. Therefore, I did not implement any context for the mechanism I am trying to implement.
At this point I am clueless, since I have tried everything I know. I would appreciate any help.
Thanks!
A.
UPDATE: it seems as if the aggregate aware function is not working... This is how it is defined:
#Aggregate_Aware(D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A.Collectionstatus_CD,D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Collectionstatus_CD)
(I just copied the code from Kreditklasse and adapted it... That makes me even more confused...)
UPDATE 2: it really seems as if aggregate aware is not working in my case, because I selected all attributes from contract_context and it still jumps to the party context. Very confused, because THE SAME mechanism works as expected when I select Kreditklasse...
Check the aggregate navigation.
Setting up Aggregate Awareness requires two steps (in addition to correctly defining the joins between the tables, of course):
Define the objects with the Aggregate_Aware function
Set table-object incompatibilities through Actions > Set Aggregate Navigation.
It sounds like the second part is not properly configured: make sure that any objects which require the second table are marked incompatible with the first.
We have the need to clone a complex data structure from one org to another. This contains a series of custom SObjects, including parents and children.
The flow would be the following. On origin org, we just JSON.serialize the list of SObjects we want to send. Then, on target org, we can JSON.deserialize that list of objects. So far so good.
The problem is that we cannot insert those SObjects directly, since they contain the origin org's IDs and Salesforce won't let us insert objects that already have Ids.
The solution we found is to manually insert the object hierarchy, maintaining a map of originId > targetId and fixing the relationships manually. However, we wonder if Salesforce provides an easier way to do such a thing, or someone knows a better way to do it.
Is there a built-in way in Salesforce to do this? Or are we stuck with a tedious manual process?
A List.deepClone() call with preserveIds = false might deal with one part of the problem. Then:
Consider using upsert operation to build the relationships for you.
Upsert can not only prevent duplicates but also maintain hierarchies.
You'll need an external Id field on the parent, not on the children though.
/* Prerequisites to run this example successfully:
- a field Account_Number__c marked as an external Id (sadly, you can't mark the standard one)
- an account in the DB with that value (but the point of the example is to NOT query for its Id)
*/
Account parent = new Account(Account_Number__c = 'A364325');
Contact c = new Contact(LastName = 'Test', Account = parent);
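// No SOQL query for the parent's Id is needed: the upsert resolves the Account lookup through the external Id value set above.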
upsert c;
System.debug(c);
System.debug([SELECT AccountId, Account.Account_Number__c FROM Contact WHERE Id = :c.Id]);
If you're not sure whether it will work for you - play with Data Loader's upsert function, might help to understand.
If you have a more than 2-level hierarchy on the same sObject type, I think you'd still have to upsert them in the correct order though (or use the Database.upsert version and keep rerunning it for the failed ones).
I have three tables: objects (primary key object_ID), flags (primary key flag_ID), and object_flags (a cross table between objects and flags with some extra info).
I have a query returning all flags, plus a one or zero indicating whether a given object has a certain flag:
SELECT
f.*,
of.*,
of.object_ID IS NOT NULL AS object_has_flag
FROM
flags f
LEFT JOIN object_flags of
ON (f.flag_ID = of.flag_ID) AND (of.object_ID = :objectID);
In the application (which is written in Delphi), all rows are loaded in a component. The user can assign flags by clicking check boxes in a table, modifying the data.
Suppose one line is edited. Depending on the value of object_has_flag, the following things have to be done:
If object_has_flag was true and still is true, an UPDATE should be done on the relevant row in objects_flags.
If object_has_flag was false but is now true, an INSERT should be done
If object_has_flag was true, but is now false, the row should be deleted
It seems that this cannot be done in one query (see https://stackoverflow.com/questions/7927114/conditional-replace-or-delete-in-one-query).
I'm using MyDAC's TMyQuery as a dataset. I have written separate code that executes the necessary queries to save changes to a row, but how do I couple this to the dataset? What event handler should I use, and how do I tell the TMyQuery that it should refresh instead of post?
EDIT: apparently, it is not completely clear what the problem is. The standard UpdateSQL, DeleteSQL and InsertSQL cannot be used because sometimes after editing a line (not deleting it or inserting a line), an INSERT or DELETE has to be done.
The short answer is, to paraphrase your answer here:
Look up the documentation for "Updating Data with MyDAC Dataset Components" (as of MyDAC 5.80).
Every TCustomDADataSet descendant (such as TMyQuery) has the capability to set update SQL statements using the SQLInsert, SQLUpdate and SQLDelete properties.
TMyUpdateSQL is also a promising component for custom update operations.
It seems that the easiest way is to use the BeforePost event, and determine what has to be done using the OldValue and NewValue properties of several fields.
Our win32 application assembles objects from the data in a number of tables in a MySQL relational database. Of such an object, multiple revisions are stored in the database.
When storing multiple revisions of something, sooner or later you'll ask yourself whether you can visualize the differences between two revisions :) So my question is: what would be a good way to "diff" two such database objects?
Would you do the comparison at the database level? (Doesn't sound like a good idea: too low-level, and too sensitive to the schema).
Would you compare the objects?
Would you write a function that "manually" compares the properties and fields of two objects?
How would you store the diff? In a separate, generic "TDiff" object?
Any general recommendations on how to visualize such things in a user interface?
Advice, or stories about your own experiences with this, are very welcome; thanks a bunch!
Extra info on use case (20090515)
In reply to Antony's comment: this specific application is used to schedule training courses, run by teams of teachers. The schedule of a teacher is stored in various tables in the database, and contains info such as "where does she have to go on which day", "who are her colleagues in the team", etc. This information is spread out over multiple tables.
Once in a while, we "publish" the schedule, so the teachers can see it on a webpage. Each "publication" is a revision, and we'd like to be able to show the users (and later also the teachers) what's changed between two publications --- if anything.
Hope that makes the scenario a bit more tangible :)
Some final remarks
Well, the bounty has come to an end, so I've accepted an answer. If it'd somehow be possible to slice a couple of extra 100's off of my rep and give it to some of the other answers, I would do so without hesitation. All your guys' help has been great, and I am very grateful! ~ Onno 20090519
Just an idea, but would it be worthwhile for you to convert the two object versions being compared to some text format and then compare these text versions using an existing diff program - diff itself, for example? There are lots of nice diff programs out there that can offer nice visual representations, etc.
So for example
Text version of Object 1:
first_name: Harry
last_name: Lime
address: Wien
version: 0.1
Text version of Object 2:
first_name: Harry
last_name: Lime
address: Vienna
version: 0.2
The diff would be something like:
3,4c3,4
< address: Wien
< version: 0.1
---
> address: Vienna
> version: 0.2
Assume that a class has 5 known properties - date, time, subject, outline, location. When I look at my schedule, I'm most interested in the most recent (ie current/accurate) version of these properties. It would also be useful for me to know what, if anything, has changed. (As a side note, if the date, time or location changed, I'd also expect to get an email/sms advising me in case I don't check for an updated schedule :-))
I would suggest that the 'diff' is performed at the time the schedule is amended. So, when version 2 of the class is created, record which values have changed and store this in two 'changelog' fields on the version 2 object (there must already be one parent table that sits atop all your tables - use that one!). One changelog field is human-readable text, e.g. 'Date changed from Mon 1 May to Tues 2 May, Time changed from 10:00am to 10:30am'. The second changelog field is a delimited list of changed fields, e.g. 'date,time'. To do this, before saving you would loop over the values submitted by the user, compare them to the current database values, and concatenate two strings, one human-readable and one a list of field names. Then update the data and set your concatenated strings as the 'changelog' values.
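As a rough, framework-agnostic illustration of that loop (sketched in PHP; the field names and values below are made up), building the two changelog strings might look like this:

// Compare submitted values against the stored ones and build one human-readable
// changelog plus one delimited list of changed field names.
$current   = ['date' => 'Mon 1 May',  'time' => '10:00am', 'outline' => 'Intro'];
$submitted = ['date' => 'Tues 2 May', 'time' => '10:30am', 'outline' => 'Intro'];

$readable = [];
$changed  = [];
foreach ($submitted as $field => $newValue) {
    if ($current[$field] !== $newValue) {
        $readable[] = ucfirst($field) . " changed from {$current[$field]} to {$newValue}";
        $changed[]  = $field;
    }
}

$changelogText   = implode(', ', $readable); // "Date changed from Mon 1 May to Tues 2 May, Time changed from 10:00am to 10:30am"
$changelogFields = implode(',', $changed);   // "date,time"
// ...then save both strings on the new version's record along with the updated data.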
When displaying the schedule, load the current version by default. Loop through the fields in the changelog field list, and annotate the display to show that the value has changed (a *, a highlight, etc). Then, in a separate panel, display the human-readable change log.
If a schedule is amended more than once, you would probably want to combine the changelogs between version 1 & 2, and 2 & 3. Say in version 3 only the course outline changed - if that was the only changelog you had when displaying the schedule, the change to date and time wouldn't be displayed.
Note that this denormalised approach won't be great for analysis - eg working out which specific location always has classes changed out of it - but you could extend it using an E-A-V model to store the change log.
Doing a comparison at the database level would be good if what you cared about was changes to the database. That makes the most sense if you're trying to design a layer of generic functionality on top of the database itself.
Doing a comparison at the object level would be good if you care about changes to the data. For example, if the data was the input to a program and you were interested in looking at changes in the input to verify that changes to the output were correct.
Your use case doesn't appear to be either of these. You appear to care about the output and want differences from that perspective. If that's the case, I would do differences on the output report (or a pure-text version of it) instead of on the underlying data. You can do that with any off-the-shelf diff tool. To make things easier for your end-users you could parse the diff results and render them as HTML. There are lots of options here: side-by-side with color coding to indicate changes, one document with markup for changes (e.g. red strikethrough for deletions and green for additions), maybe just highlight areas that have changed and use balloons to show the previous/current values on demand.
I've thought about doing database comparisons but never tried to implement it. As you noted, any such attempts are intimately intertwined with the schema.
I have done object-level comparisons. The general algorithm was this:
Do a set comparison on the lists of object IDs. This creates three result groupings: added objects, deleted objects, and objects that live in both sets.
Report the deletions.
Report the additions.
For the things in both sets, do an attribute-by-attribute comparison.
If any differences are found, report the object ID, the attributes that differ, and the respective values. If appropriate, highlight the portion of the attribute value that has changed.
In my case, the comparison algorithms were hand-written to match the object attributes. This gave me control over which attributes were compared and how. A generic comparator might be possible for some cases but would depend on the situation and at least partially on the implementation language.
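The steps above are language-agnostic; here is a rough sketch in PHP (the input shape, a map of object ID to attribute array, is assumed purely for illustration):

// Set comparison on the IDs, then attribute-by-attribute comparison
// for the objects that are present in both revisions.
function diffRevisions(array $old, array $new): array
{
    $result = [
        'deleted' => array_values(array_diff(array_keys($old), array_keys($new))),
        'added'   => array_values(array_diff(array_keys($new), array_keys($old))),
        'changed' => [],
    ];

    foreach (array_intersect(array_keys($old), array_keys($new)) as $id) {
        foreach ($new[$id] as $attr => $value) {
            $before = $old[$id][$attr] ?? null;
            if ($before !== $value) {
                $result['changed'][$id][$attr] = ['old' => $before, 'new' => $value];
            }
        }
    }

    return $result;
}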
I've looked into MySQL diffing a number of times. Unfortunately, there aren't any really good solutions available.
One tool I've tried was mysqldiff (www.mysqldiff.org). mysqldiff is a tool written in PHP which is capable of diffing MySQL schemas. Unfortunately, it doesn't do a great job a lot of the time.
MySQL Workbench, MySQL's own SQL IDE, provides the option to generate an ALTER script, and I would imagine it does this by performing some kind of diff operation internally.
Aqua Data Studio is another tool that is capable of comparing schemas and outputting a diff of the two. While the ADS diff is quite nice, it does not provide a tool to create an ALTER script.
If I were writing my own, I guess I would write code capable of comparing the structure of two tables. Such code could be tuned to be highly sensitive (e.g. if column order differs from one version to the next, it counts as a difference) or more moderately sensitive (e.g. column order is not a major issue, but data types and lengths are important, as are indices and constraints).
As for storage, I'm not too sure. I would look into how a version control system such as Mercurial stores its diff information for revisions, and use that to work out a method appropriate for the DB.
Finally, for visual output I recommend you take a look at the Aqua Data Studio compare feature (you can use the trial version to test this...). Its diff output is pretty good.
My application dbscript compares hierarchical data (database schemas) in a stored procedure, which of course has to compare each field/property of every object with its counterpart. I guess you won't get around that step (unless you have a generic object description model)
As for the UI part of your question, have a look at screenshots to view and select differences.
I would think about some sort of common text representation of the objects and compare the texts with an existing diffing tool like WinMerge.
I see no need to reinvent diffing myself, since there are already plenty of nice tools out there.
In a situation like yours, I used history tables in PostgreSQL with the following schema:
create table history_columns (
column_id smallint primary key,
column_name text not null,
table_name text not null,
unique (table_name, column_name)
);
create temporary sequence column_id_seq;
insert into history_columns
select nextval('column_id_seq'), column_name, table_name
from information_schema.columns
where
table_name in ('table1','table2','table3')
and table_schema=current_schema() and table_catalog=current_database();
create table history (
column_id smallint not null references history_columns,
id int not null,
change_time timestamp with time zone not null
constraint change_time_full_second -- only one change allowed per second
check (date_trunc('second',change_time)=change_time),
primary key (column_id,id,change_time),
value text
);
And on the tables I used a trigger like this:
create or replace function save_history() returns trigger as
$$
begin
if (tg_op = 'DELETE') then
insert into history values (
find_column_id('id',tg_relname), OLD.id,
date_trunc('second',current_timestamp),
OLD.id );
[for each column_name] {
if (char_length(OLD.column_name)>0) then
insert into history values (
find_column_id(column_name,tg_relname), OLD.id,
OLD.change_time, OLD.column_name
);
end if;
}
elsif (tg_op = 'UPDATE') then
[for each column_name] {
if (OLD.column_name is distinct from NEW.column_name) then
insert into history values (
find_column_id(column_name,tg_relname), OLD.id,
OLD.change_time, OLD.column_name
);
end if;
}
end if;
-- a BEFORE trigger must return a row, otherwise the operation is skipped
if (tg_op = 'DELETE') then return OLD; else return NEW; end if;
end;
$$ language plpgsql volatile;
create trigger save_history_table1
before update or delete on table1
for each row execute procedure save_history();
This isn't really an answer to the question you asked, but rather an attempt to re-imagine the problem. Would you consider altering your database and object model to store the aggregate root and a series of deltas? That is, model and store RevisionSets that are collections of Revisions; a Revision is an entity property paired with a value. In a sense this internalizes into your architecture the revision structure that the other posters are suggesting you bolt on to what you already have via "logs".
It's trivial to display the aggregate from the deltas, and even easier to display the deltas as a change history. The fact that you are using a rich client with state and local memory makes this even more compelling. You could very easily display "all the changes since date xxxx" without revisiting the database.
Credit for the basic idea goes to Greg Young and his work with financial data streams, but it is eminently applicable to your problem.
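A minimal sketch of that structure in PHP (the class and function names below are illustrative, not taken from the original suggestion):

// A Revision is one property/value pair; a RevisionSet groups the Revisions that
// belong to one publication. Replaying the sets in order rebuilds the current state.
class Revision
{
    public $property;
    public $value;

    public function __construct($property, $value)
    {
        $this->property = $property;
        $this->value    = $value;
    }
}

class RevisionSet
{
    /** @var Revision[] */
    public $revisions;

    public function __construct(array $revisions)
    {
        $this->revisions = $revisions;
    }
}

function currentState(array $revisionSets): array
{
    $state = [];
    foreach ($revisionSets as $set) {
        foreach ($set->revisions as $rev) {
            $state[$rev->property] = $rev->value;
        }
    }
    return $state;
}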
I'm riffing off of what Harry Lime suggested: output your properties to text format, then hash the results. That way you can compare the hash values and easily flag the data that has been altered. This way you get the best of both worlds, as you can visually see differences but programmatically identify them. With the hash you'll have a good source for an index, should you want to store and retrieve the deltas.
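For instance (a sketch in PHP; the function name is made up), hashing a canonical text dump of the properties gives you a cheap equality check before doing any detailed diff:

// Render the properties in a canonical order and hash the result.
function revisionFingerprint(array $properties): string
{
    ksort($properties);        // stable ordering so equal data always yields equal hashes
    $text = '';
    foreach ($properties as $name => $value) {
        $text .= "$name: $value\n";
    }
    return sha1($text);        // differing fingerprints flag revisions worth diffing in detail
}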
Given you want to create a UI for this and need to indicate where the differences are, it seems to me you can either go custom or create a generic object comparer - the latter being dependent on the language you are using.
For the custom method, you need to create a class that takes two instances of the class to be compared. It then reports the differences:
public class Person
{
    public string Name;
}

public class PersonComparer
{
    private readonly Person oldP;
    private readonly Person newP;

    public PersonComparer(Person oldP, Person newP)
    {
        this.oldP = oldP;
        this.newP = newP;
    }

    public bool NameIsDifferent() { return oldP.Name != newP.Name; }
    public string NameDifferentText() { return NameIsDifferent() ? "Name changed from " + oldP.Name + " to " + newP.Name : ""; }
}
This way you can use the PersonComparer object to create your GUI.
The generic approach would be much the same, except that you generalize the calls and use object inspection (the getObjectProperty call below) to find differences:
public class ObjectComparer
{
    private readonly object oldObj;
    private readonly object newObj;

    public ObjectComparer(object oldObj, object newObj)
    {
        this.oldObj = oldObj;
        this.newObj = newObj;
    }

    // Object inspection via reflection on public properties.
    private static object getObjectProperty(object obj, string propertyName)
    {
        return obj.GetType().GetProperty(propertyName).GetValue(obj, null);
    }

    public bool PropertyIsDifferent(string propertyName) { return !Equals(getObjectProperty(oldObj, propertyName), getObjectProperty(newObj, propertyName)); }
    public string PropertyDifferentText(string propertyName) { return PropertyIsDifferent(propertyName) ? propertyName + " changed from " + getObjectProperty(oldObj, propertyName) + " to " + getObjectProperty(newObj, propertyName) : ""; }
}
I would go for the second, as it makes it really easy to change the GUI as needs change. In the GUI I would try 'yellowing' the differences to make them easy to see - but that depends on how you want to show the differences.
Getting the objects to compare would be a matter of loading your object with the initial revision and with the latest revision.
My 2 cents... Not as techy as the database compare stuff already here.
Have you looked at Open Source DiffKit?
www.diffkit.org
I think it does what you want.
Example with Oracle:
Export ordered objects to text with dbms_metadata
Export ordered table data into CSV or query format
Make one big text file
Diff