I need an idea/tip on how to use DbUnit to assert IDs generated by the database (e.g. MySQL's auto-increment column). I have a very simple case which I nevertheless find problematic at the moment:
Two tables: main and related. The main.id column is auto-incremented, and the related table has a foreign key to it: related.main_id -> main.id. In my test case the application inserts multiple entries into both tables, so the dataset looks similar to this:
<dataset>
<main id="???" comment="ABC" />
<main id="???" comment="DEF" />
<related id="..." main_id="???" comment="#1 related to ABC" />
<related id="..." main_id="???" comment="#2 related to ABC" />
<related id="..." main_id="???" comment="#3 related to DEF" />
<related id="..." main_id="???" comment="#4 related to DEF" />
</dataset>
As the order in which the inserts will be performed is unclear, I cannot simply clear/truncate the table before the test and use predefined IDs (e.g. the "ABC" entry comes first so it gets ID 1, and "DEF" comes second so it gets 2). A test written that way would be wrong: with a bit of luck it may sometimes pass, and in other cases not.
Is there a clean way to test such cases? I still want to assert that the entries were created and linked properly in the DB, not only that they exist (which is all I could check if I simply ignored the auto-increment columns).
Based on the comments to the question, I am answering my own question, so this may help others looking for a similar solution.
In the end we skipped asserting the generated IDs, as they were not really interesting for us. What we actually wanted to check is that the entries in the main and related tables are "properly linked". To achieve this, in our unit test we created the dataset using a query joining both tables:
SELECT main.comment, related.comment AS related_comment
FROM main, related
WHERE main.id = related.main_id
Then we assert that the dataset produced by this query matches a statically defined dataset:
<dataset>
<result comment="ABC" related_comment="#1 related to ABC" />
<result comment="ABC" related_comment="#2 related to ABC" />
<result comment="DEF" related_comment="#3 related to DEF" />
<result comment="DEF" related_comment="#4 related to DEF" />
</dataset>
When the datasets match, we can assume that the entries were "linked properly".
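For reference, a rough sketch of how this comparison can be done with the DbUnit API (file and variable names here are assumptions, not our exact test code):
import java.io.File;
import org.dbunit.Assertion;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;

// "connection" is the DbUnit IDatabaseConnection the test already has
ITable actualJoined = connection.createQueryTable("result",
        "SELECT main.comment, related.comment AS related_comment "
        + "FROM main, related WHERE main.id = related.main_id");

// "expected-linked.xml" is an assumed file holding the statically defined dataset shown above
IDataSet expectedDataSet = new FlatXmlDataSetBuilder().build(new File("expected-linked.xml"));
ITable expectedJoined = expectedDataSet.getTable("result");

Assertion.assertEquals(expectedJoined, actualJoined);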
Maybe you can let DbUnit sort your main table by id and your related table by id automatically. Since the absolute number of rows is known in advance, this should solve your problem.
DbUnit allows sorting with org.dbunit.dataset.SortedTable, which needs a table and a list of columns to sort by. See the JavaDoc of SortedTable.
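A small sketch of that API (table and column names taken from the question; the dataset variable is illustrative):
import org.dbunit.dataset.ITable;
import org.dbunit.dataset.SortedTable;

// wrap the actual tables so rows are compared in a deterministic order, sorted by "id"
ITable sortedMain = new SortedTable(actualDataSet.getTable("main"), new String[]{"id"});
ITable sortedRelated = new SortedTable(actualDataSet.getTable("related"), new String[]{"id"});
// the expected tables can be wrapped the same way before Assertion.assertEquals(...)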
I'm somewhat new to JIRA (skill level: novice).
Jira v 6.4.8
JIM v 7.0.12
I am attempting to import issues using the Issue->Import from CSV (bulk create tool)
I have defined ticket CM-1 as a parent ticket. A generic CSV looks like this:
Summary, Parent ID, Issue ID
CM-2, CM-1,
CM-3, CM-1,
CM-4, CM-1,
The first import works successfully and the rows are mapped as children of CM-1.
We then attempt to re-import (to update the ~100 fields that changed overnight, not shown in this example for clarity):
Summary, Parent ID, Issue ID
CM-2, CM-1, CM-2
CM-3, CM-1, CM-3
CM-4, CM-1, CM-4
We encounter an issue where new subtasks are created, and nothing is updated.
I have also tried to map the Issue ID found when I inspect a subtask ticket's XML. It looks something like this:
<item>
  <title>[CM-2] CM2</title>
  <link>https://website.net/browse/CM-2</link>
  <project id="11902" key="CM">Change Management</project>
  <description>CM-2 Description</description>
  <environment/>
  <key id="191147">CM-2</key>
  <summary>CM-2</summary>
Specifically, the id attribute on the <key> element, e.g. <key id="191147">CM-2</key>.
So that would look like:
Summary, Parent ID, Issue ID
CM-2, CM-1, 191147
CM-3, CM-1, 191148
CM-4, CM-1, 191149
Once again we see new issues created and no updates performed. I've read the documentation, searched their 'Answers' site, asked multiple questions, and searched everywhere, but I'm not seeing any solutions. We literally need to update thousands of tickets at least once a day; we don't have the manpower to perform this task any other way.
Criteria:
This needs to be something an end user or a team lead can perform; they will have access to the bulk import tool (bulk create) via the Issues -> Import Issues from CSV link, but will not have access to the administrator-level external project imports.
I know this isn't an ideal long-term solution and I would like to investigate a method to further automate this, but in the short term we need this to work.
I appreciate any and all responses. We are importing from a very outdated instance of Remedy that's going to remain in use for the next ~3+ years.
Thanks,
Jacob
First of all, if you want to update issues via CSV, you must include an 'Issue Key' column and, during import, map it to the issue key field (CM-1, CM-2, etc. are issue keys in your example). Otherwise every import will generate new issues in JIRA.
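For example, a minimal (hypothetical) CSV for updating the three sub-tasks from the question could look like this, with the 'Issue Key' column mapped to the issue key field in the import wizard:
Summary, Issue Key
Updated summary for CM-2, CM-2
Updated summary for CM-3, CM-3
Updated summary for CM-4, CM-4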
The 'Issue ID' and 'Parent ID' columns refer to internal IDs (not issue keys). For adding/updating sub-tasks, you need to figure out the ID of the parent (see below), write that ID in the 'Parent ID' column of the CSV, and leave the 'Issue ID' value empty. This is explained in the 'Creating sub-tasks' section here.
Figuring out the ID of an existing JIRA issue is somewhat tricky (unless you imported the issues from the beginning with your own internal IDs, which makes some sense). An easy way from the GUI is to right-click the Edit button and choose 'Open in new tab'; the URL of the edit page will include the ID (e.g. http://jira-srv/secure/EditIssue!default.jspa?id=91796).
If you need to automate it, you will have to resort to querying the database directly (unless someone else can offer a better way... as far as I know the REST API does not expose it). See the discussion here if you want details.
I make a query (with \yii\db\ActiveQuery) with joins, and some fields in the "where" clause become ambiguous. Is there a nice and short way to prefix the column name with the table name of the current model (the ActiveRecord from which the ActiveQuery was instantiated), so that I can use it all the time in all cases and keep it short?
I don't like doing something like this all the time (especially in places where there are no joins, just so those methods keep working with joins if they are ever needed):
// in the ActiveQuery method initialized from the model with tableName "company"
$this->andWhere(['{{%company}}.`company_id`' => $id]);
To make the "named scopes" to work for some cases with joins..
Also, what does the [[..]] syntax mean in this case, e.g.:
$this->andWhere(['[[company_id]]' => $id]);
It doesn't seem to solve the problem described above.
Thanks in advance!
P.S. sorry, don't have enough reputation to create tag yii2-active-query
To get the real table name:
From the class:
ModelName::getTableSchema()->fullName
From an object:
$model::getTableSchema()->fullName
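For example, inside an ActiveQuery method you could then write something like the following (a sketch using the company/company_id names from the question, not tested code):
$modelClass = $this->modelClass;                  // the ActiveRecord class this query was built from
$table = $modelClass::getTableSchema()->fullName; // e.g. "company"
$this->andWhere(["{$table}.company_id" => $id]);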
Your problem is a very common one and happens most often with fields like description, notes and the like.
Solution
Instead of
$this->andWhere(['description'=>$desc]);
you simply write
$this->andWhere(['mytable.description'=>$desc]);
Done! Simply add the table name in front of the field. Both the table name and the field name will be automatically quoted when the raw SQL is created.
Pitfall
The above example solves your problem within query classes. One pitfall I struggled with, and which took me quite some time to solve, was a model's relations! If you join other tables in your queries (more than just one), you can also run into this problem, because the relation methods within the model are not qualified.
Example: say you have three tables: student, class, and teacher. Student and teacher are probably both related to class and both have an FK field class_id. Now if you go from student via class to teacher ($student->class->teacher), you also get the ambiguous-column error. The problem here is that you should also qualify the relation definitions within your models!
public function getTeacher()
{
    return $this->hasOne(Teacher::className(), ['teacher.id' => 'class.teacher_id']);
}
Proposal
When developing your models and query classes, always fully qualify the fields. You will never run into this problem again...that was my experience at least! I actually created my own Gii model template, so this is handled automatically now ;)
Hope it helped!
I am currently working on a DataImportHandler that retrieves data from MySQL for quick searching. It consists of the import of a root entity (CabinCategoryFares) and a few child entities (Cruise, RouteDay, Ship).
This import works, but is very slow: the relation between e.g. CabinCategoryFares and Cruise is many-to-one, so many identical queries are fired against Cruise.
To alleviate this, I am trying to implement SortedMapBackedCache caching on the child entities. Below is a snippet; the original is quite big.
<document name="Cruises">
<entity name="CabinCategoryFare" transformer="RegexTransformer" query="SELECT CabinCategoryFare.cruise_id FROM CabinCategoryFare">
<entity name="Cruise" cacheImpl="SortedMapBackedCache" cacheKey="Cruise.id" cacheLookup="CabinCategoryFare.cruise_id"query="SELECT Cruise.id FROM Cruise">
</entity>
</entity>`
This returns NULL for every field that is read from Cruise. I can tell from the logs that the DataImportHandler is running the Cruise query, but it just isn't returning any results or errors after that. It seems it isn't able to find any hits on the cacheLookup, but logging in the DIHCacheSupport class is non-existent and I'm at a total loss as to what's happening, or rather why it isn't happening.
Any thoughts?
Found the problems:
1. Bug in Solr/DIHCacheSupport.java: https://stackoverflow.com/a/21732907/3012497
(cacheKey gets uppercased somewhere in the process, but cacheLookup does not, so one always needs to use an uppercase cacheLookup)
2. The query for the Cruise entity uses a grouping function (GROUP_CONCAT) but didn't have a GROUP BY clause. This wasn't a problem when uncached (because of the WHERE clause), but without that WHERE it would only return one row.
3. DIHCacheSupport seems to only work with string keys; an int key will cause an exception that does not show up in the logs.
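For illustration, the snippet from the question might look roughly like this with points 1 and 3 applied (this is only a sketch; the real query also needs its GROUP BY from point 2):
<entity name="CabinCategoryFare" transformer="RegexTransformer"
        query="SELECT CabinCategoryFare.cruise_id FROM CabinCategoryFare">
  <!-- point 1: reference the parent column in uppercase;
       point 3: make sure both key columns are strings (CAST them in SQL if they are ints) -->
  <entity name="Cruise" cacheImpl="SortedMapBackedCache"
          cacheKey="Cruise.id"
          cacheLookup="CABINCATEGORYFARE.CRUISE_ID"
          query="SELECT Cruise.id FROM Cruise">
  </entity>
</entity>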
Hope this might save someone a few hours.
Basically, if a user uploads the same C-CDA document again, or other documents containing the same entries (medications, vitals, allergies, surgeries, etc.), then I want to make sure those entries do not get duplicated in the database and want to skip inserting them again.
Each entry inside an HL7 CDA document can have an id, whose definition from the HL7 V3 RIM is:
3.1.1.3 Act.id :: SET<II> (0..N)
Definition: A unique identifier for the Act.
Use it in order to uniquely identify your entries and avoid duplicates.
This element is not mandatory in general, but if you are implementing C-CDA, the template for substance administration specifies that it is mandatory, so you should ask the document sender to populate it. Here is a substance administration example from C-CDA:
<substanceAdministration classCode="SBADM" moodCode="EVN">
  <templateId root="2.16.840.1.113883.10.20.22.4.16"/>
  <id root="cdbd33f0-6cde-11db-9fe1-0800200c9a66"/>
  <text>
    <reference value="#med1"/>
    Proventil 0.09 MG/ACTUAT inhalant solution, 2 puffs QID PRN wheezing
  </text>
  <statusCode code="completed"/>
  <effectiveTime xsi:type="IVL_TS">
    <low value="20110301"/>
    <high value="20120301"/>
  </effectiveTime>
  <effectiveTime xsi:type="PIVL_TS" institutionSpecified="true" operator="A">
    <period value="6" unit="h"/>
  </effectiveTime>
  ...
Martí
martipamies#vico.org
I'm using SA in a script that will periodically 'copy' a subset of MySQL tables from a 'production' replica to dev/test systems. I had written code to simply reflect the source tables and call meta.create_all(destination_engine). Due to the nature of FKs, I now know I need to apply use_alter=True to the ForeignKeys on the tables as I create them so that I won't get CircularDependencyErrors or other problems. I have to assume I don't know how many FKs there are, or their names, until I go through the metadata.
I'm new to SA and am typically a Java programmer (as you will be able to tell :D). I tried to change the use_alter attribute iteratively at first:
tablesd = smeta.tables.items()
for tname, t in tablesd:
    for c in t.columns:
        for fk in c.foreign_keys:
            fk.use_alter = True
smeta.create_all(to_engine)
EDIT: It's important to note that create_all() does NOT throw a CircularDependencyError after I set the use_alter property as above. If I remove that code, create_all() does not work. It just doesn't seem to be removing the FKs from the CREATE statements...
This obviously didn't work. I then read Overriding Reflected Columns in the SA docs, the sample being:
mytable = Table('mytable', meta,
    Column('id', Integer, primary_key=True),  # override reflected 'id' to have primary key
    Column('mydata', Unicode(50)),            # override reflected 'mydata' to be Unicode
    autoload=True)
I'd guess that reflecting each table individually and then adding use_alter=True in the FK definition would work, but I CANNOT assume the names, values, or number of FKs/columns. I read a lot about using DeclarativeBase to do something like this, but I'm not really sure how that would work...
How can I take my arbitrary list of tables, reflect them, and then override the use_alter option on their respective foreign keys? Am I thinking about this the wrong way?
The answer ended up being inside the problem (imagine that...). Although each ForeignKey object has a use_alter value that can be set, Constraints also have a separate use_alter property that can be set (I was not able to find this in the API documentation). After running it through PyDev's debugger, I noticed the former were being set, but on all the keys that had Constraints associated with them it was still False. I set them to True like so:
for fk in table.foreign_keys:
    fk.use_alter = True
    fk.constraint.use_alter = True
This seemed to produce the SQL I was looking for: the tables were created correctly with no CircularDependencyErrors, and metadata.sorted_tables seemed to work fine with no errors. I was actually able to refactor my code and do things the RIGHT way!
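For completeness, here is a condensed sketch of how the pieces fit together (engine URLs and names are placeholders, not the exact production script):
from sqlalchemy import MetaData, create_engine

from_engine = create_engine("mysql://user:pass@prod-replica/sourcedb")  # placeholder URL
to_engine = create_engine("mysql://user:pass@devbox/targetdb")          # placeholder URL

smeta = MetaData()
smeta.reflect(bind=from_engine)          # reflect all tables from the source

for table in smeta.tables.values():
    for fk in table.foreign_keys:
        fk.use_alter = True              # flag on the ForeignKey itself
        fk.constraint.use_alter = True   # ...and on its ForeignKeyConstraint

smeta.create_all(to_engine)              # FKs are now emitted as ALTER TABLE statements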
For anyone looking to do DB-->DB reflecting with complex FKs using SQLAlchemy, this answer and Tyler Lesmann's article are for you.
UPDATE: Using this method has passed a peer review and it is now being used as production code. It seems to work well!