I am currently working on a DataImportHandler that retrieves data from MySQL for quick searching. It consists of the import of a root entity, CabinCategoryFares, and a few child entities (Cruise, RouteDay, Ship).
This import works, but it is very slow: the relation between e.g. CabinCategoryFares and Cruise is many-to-one, so many identical queries are fired against Cruise.
To alleviate this, I am trying to implement SortedMapBackedCache caching on the child entities. Below is a snippet; the original is quite big.
<document name="Cruises">
  <entity name="CabinCategoryFare" transformer="RegexTransformer" query="SELECT CabinCategoryFare.cruise_id FROM CabinCategoryFare">
    <entity name="Cruise" cacheImpl="SortedMapBackedCache" cacheKey="Cruise.id" cacheLookup="CabinCategoryFare.cruise_id" query="SELECT Cruise.id FROM Cruise">
    </entity>
  </entity>
</document>
This returns NULL for every field that is read from Cruise. I can tell from the logs that the DataImportHandler is running the Cruise query, but it just isn't returning any results or any errors after that. It seems it isn't able to find any hits on the cacheLookup, but logging in the DIHCacheSupport class is non-existent and I'm at a total loss as to what's happening, or rather why it isn't happening.
Any thoughts?
Found the problems:
1. Bug in Solr/DIHCacheSupport.java: https://stackoverflow.com/a/21732907/3012497
(cacheKey gets uppercased somewhere in the process but cacheLookup does not, so one always needs to write the cacheLookup in uppercase)
2. The query for the Cruise entity uses a grouping function (GROUP_CONCAT) but didn't have a GROUP BY clause. This wasn't a problem when uncached (because of the per-row WHERE clause), but without a WHERE clause it returns only one row.
3. DIHCacheSupport seems to only work with string keys; an int key will cause an exception that does not show up in the logs.
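Putting the three fixes together, the cached child entity would look something like this (a sketch based on the snippet above; the GROUP_CONCAT column stands in for the real field list):
<document name="Cruises">
  <entity name="CabinCategoryFare" transformer="RegexTransformer" query="SELECT CabinCategoryFare.cruise_id FROM CabinCategoryFare">
    <!-- fix 1: cacheLookup written in uppercase; fix 3: the key cast to a string -->
    <entity name="Cruise" cacheImpl="SortedMapBackedCache" cacheKey="id" cacheLookup="CABINCATEGORYFARE.CRUISE_ID"
            query="SELECT CAST(Cruise.id AS CHAR) AS id, GROUP_CONCAT(...) FROM Cruise GROUP BY Cruise.id">
      <!-- fix 2: GROUP BY added to match the grouping function -->
    </entity>
  </entity>
</document>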
Hope this might save someone a few hours.
I have successfully imported geodata (originally from a shapefile, converted to CSV) into my RavenDB. I am now trying to access the data with a naive, simplistic select (a sanity check to see if everything's there), but I can't get any data member values back. Since I am a total RavenDB newbie and haven't created the data myself (programmatically), my approach was to define a class with the same name as what I find in Raven Studio under Raven-Entity-Name (minus the automatically appended plural 's'), and to declare each of the data members to be of type string.
The query runs through and retrieves the first 128 results, but all the data members are null. I used this:
List<AdministrativeArea> AdministrativeArea = session
    .Query<AdministrativeArea>()
    .ToList();
Looking at the entries in Raven Studio, I can see that some of the data member values of the documents are coloured blue (so they are probably already typed as integers), but that shouldn't cause ALL the data members to show up as null...
No exceptions are being thrown, and the query list contains elements. What am I doing wrong here?
Thanks for your help!
The problem was the instantiation of the int data members. Even when declaring the int members as nullable, the empty strings in the data prevented correct instantiation of the objects.
I suppose that when CSV imports are used and a field sometimes comes up empty (but as a string type) while in other cases it does hold numbers, you have to resort to declaring them all as strings. The only other solution I could think of is to adapt the CSV import code, but I am still too new to RavenDB to attempt that.
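For illustration, a minimal sketch of the kind of document class that works in this situation (the property names besides the class name are made up; they need to match the fields shown in Raven Studio):
// All members are declared as string so that empty CSV fields
// deserialize cleanly, even when other documents hold numbers.
public class AdministrativeArea
{
    public string Id { get; set; }        // document id, filled in by RavenDB
    public string Name { get; set; }      // hypothetical field
    public string AreaCode { get; set; }  // hypothetical field: numeric in some docs, empty in others
}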
I'm somewhat new to JIRA (skill level: novice).
Jira v6.4.8
JIM v7.0.12
I am attempting to import issues using Issues > Import from CSV (the bulk create tool).
I have defined a ticket, CM-1, as a parent ticket. A generic CSV looks like this:
Summary, Parent ID, Issue ID
CM-2, CM-1,
CM-3, CM-1,
CM-4, CM-1,
The first import works successfully and the rows are mapped as children of CM-1.
We then attempt to re-import (to update the ~100 fields that changed overnight, not shown in this example for clarity):
Summary, Parent ID, Issue ID
CM-2, CM-1, CM-2
CM-3, CM-1, CM-3
CM-4, CM-1, CM-4
We encounter an issue where new subtasks are created, and nothing is updated.
I have also tried to map the issue ID found when I inspect the subtask ticket's XML. It looks something like this:
<item>
<title>[CM-2] CM2</title>
<link>
https://website.net/browse/CM-2
</link>
<project id="11902" key="CM">Change Management</project>
<description>CM-2 Description</description>
<environment/>
<key id="191147">CM-2</key>
<summary>CM-2</summary>
Specifically the id in the <key id="191147"> element.
So that would look like
Summary, Parent ID, Issue ID
CM-2, CM-1, 191147
CM-3, CM-1, 191148
CM-4, CM-1, 191149
Once again we see new issues created and no updates performed. I've read the documentation, searched their 'Answers' site, asked multiple questions, searched everywhere, but I'm not seeing any solutions. We literally need to update thousands of tickets, at least once a day; we don't have the manpower to perform this task any other way.
Criteria:
This needs to be performable by an end user or a team lead; they will have access to the bulk import tool (bulk create) via the Issues > Import Issues from CSV link, but will not have access to the administrator-level external project imports.
I know this isn't an ideal long-term solution, and I would like to investigate a method to further automate this, but we need a short-term solution first (this).
I appreciate any and all responses. We are importing from a very outdated instance of Remedy that's going to remain in use for the next ~3+ years.
Thanks,
Jacob
First of all, if you want to update issues via CSV, you must include an 'Issue Key' column and, during import, map it to the issue key field (CM-1, CM-2, etc. are issue keys in your example). Otherwise every import will generate new issues in JIRA.
The 'Issue ID' and 'Parent ID' columns refer to internal IDs (not issue keys). For adding/updating sub-tasks, you need to figure out the ID of the parent (see below) and, in the CSV, write the parent's ID in the 'Parent ID' column and leave the 'Issue ID' value empty. This is explained in the 'Creating sub-tasks' section here.
Figuring out the ID of an existing JIRA issue is somewhat tricky (unless you imported the issues from the beginning with your own internal IDs, which makes some sense). An easy way from the GUI is to right-click the Edit button and choose 'Open in new tab'; the URL of the edit page will include the ID (e.g. http://jira-srv/secure/EditIssue!default.jspa?id=91796).
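For example, to update the three sub-tasks from your first import, the CSV would look something like this (191146 is a made-up internal ID for the parent CM-1; use the one you find via the trick above):
Summary, Parent ID, Issue ID, Issue Key
CM-2, 191146, , CM-2
CM-3, 191146, , CM-3
CM-4, 191146, , CM-4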
If you need to automate it, you will have to resort to directly querying the database (unless someone else can offer a better way... as far as I know the REST API does not expose it). See the discussion here if you want details.
I am having a problem with a BusinessObjects universe and the way it generates queries and, consequently, yields results.
Here is the background: a mechanism that functions has already been implemented. I was trying to copy the SAME mechanism, just to deliver a different field.
Here is the data model: http://tinypic.com/r/ng524g/8
The mechanism that functions is marked in BLUE. The mechanism that I tried to implement, and that is not functioning, is marked in RED.
On the business layer I have defined a dimension with an aggregate aware function. This function takes the VWF_Party_Collection_A.Collectionstatus_CD column first (at the higher level). If a user selects an attribute from the contract level, the function takes the VWF_Contract_Collection_A.Collectionstatus_CD column.
The problem is that when I take all attributes from the VWD_Kunde_A table and then add the dimension with the mentioned aggregate aware function (i.e. Collectionstatus_CD), the query constructed by BO does not make any sense. Here it is:
SELECT
  D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A.Collectionstatus_CD,
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Namespace_TXT,
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY,
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID
FROM
  D_ATA_MV_FinanceTreasury.VWD_Party_A
  LEFT JOIN D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A
    ON D_ATA_MV_FinanceTreasury.VWD_Party_A.Party_KEY = D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A.Party_KEY,
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A
WHERE
  ( D_ATA_MV_FinanceTreasury.VWD_Party_A.Party_KEY = D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY )
  AND D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID = 102241978
Please notice the strange construction in the FROM part (a comma-joined table has been added). Another strange and unexpected construction is in the WHERE part:
( D_ATA_MV_FinanceTreasury.VWD_Party_A.Party_KEY=D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY )
The mechanism that is functioning joins VWD_Kunde_A with the VWF_Contract_Collection_A table and yields the correct result.
Now, I have tried to define a dimension without the mentioned aggregate aware function, containing only the VWF_Contract_Collection_A.Collectionstatus_CD attribute. When I run the same query, BO yields the CORRECT results and generates the CORRECT (expected) query.
This is the query I am expecting:
SELECT
  D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Collectionstatus_CD,
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Namespace_TXT,
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY,
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID
FROM
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A
  LEFT JOIN D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A
    ON D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Namespace_TXT = D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Namespace_TXT
    AND D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Party_KEY = D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Party_KEY
    AND D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID = D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Legacy_ID
WHERE
  D_ATA_MV_FinanceTreasury.VWD_Kunde_A.Legacy_ID = 102241978
Furthermore, I suspected that it could have something to do with contexts. However, I did not find any context for the mechanism that already functions and that I tried to copy. Therefore, I did not implement any context for the mechanism I am trying to implement.
At this point I am clueless since I tried everything I knew. I would appreciate help.
Thanks!
A.
UPDATE: it seems the aggregate aware function is not functioning... This is how it is defined:
@Aggregate_Aware(D_ATA_MV_FinanceTreasury.VWF_Party_Collection_A.Collectionstatus_CD, D_ATA_MV_FinanceTreasury.VWF_Contract_Collection_A.Collectionstatus_CD)
(I just copied the code from Kreditklasse and adapted it... That makes me even more confused...)
UPDATE 2: it really seems as if aggregate aware is not functioning in my case, because I selected all attributes from contract_context and it still jumps to the party context. Very confusing, because THE SAME mechanism functions as expected when I select Kreditklasse...
Check the aggregate navigation.
Setting up Aggregate Awareness requires two steps (in addition to correctly defining the joins between the tables, of course):
1. Define the objects with the @Aggregate_Aware function.
2. Set table-object incompatibilities through Actions > Set Aggregate Navigation.
It sounds like the second part is not properly configured: make sure that any objects which require the second table are marked incompatible with the first.
I'm using SA in a script that will periodically 'copy' a subset of MySQL tables from a 'production' replica to dev/test systems. I had written code to simply reflect the source tables and call meta.create_all(destination_engine). Due to the nature of the FKs, I now know I need to apply use_alter=True to the ForeignKeys on the tables as I create them, so that I won't get CircularDependencyErrors or other problems. I have to assume I don't know how many FKs there are, or their names, until I go through the metadata.
I'm new to SA and typically a Java programmer (as you will be able to tell :D). I tried to change the use_alter attribute iteratively at first:
tablesd = smeta.tables.items()
for tname, t in tablesd:
    for c in t.columns:
        for fk in c.foreign_keys:
            fk.use_alter = True
smeta.create_all(to_engine)
EDIT: It's important to note that create_all() does NOT throw a CircularDependencyError after I set the use_alter property as I do above. If I remove that code, create_all() does not work. It just doesn't seem to be removing the FKs from the CREATE statements...
This obviously didn't work. I then read Overriding Reflected Columns in the SA docs, the sample being:
mytable = Table('mytable', meta,
    Column('id', Integer, primary_key=True),  # override reflected 'id' to have primary key
    Column('mydata', Unicode(50)),            # override reflected 'mydata' to be Unicode
    autoload=True)
I'd guess that reflecting each table individually and then adding use_alter=True in the FK definition would work, but I CANNOT assume the names, values, or number of FKs/columns. I've read a lot about using DeclarativeBase to do something like this, but I'm not really sure how that would work...
How can I take my arbitrary list of tables, reflect them, and then override the use_alter option on their respective foreign keys? Am I thinking about this the wrong way?
The answer ended up being inside the problem (imagine that...). Although each ForeignKey object has a use_alter value that can be set, Constraints also have a separate use_alter property (I was not able to find this in the API documentation). After running it through PyDev's debugger, I noticed the former were being set, but all the keys that had Constraints associated with them were still False. I set them both to True thusly:
# set the flag on both the key and its owning ForeignKeyConstraint
for fk in table.foreign_keys:
    fk.use_alter = True
    fk.constraint.use_alter = True
This produced the SQL I was looking for: the tables were created correctly with no CircularDependencyErrors, and metadata.sorted_tables worked fine with no errors. I was actually able to refactor my code and do things the RIGHT way!
For anyone looking to do DB-to-DB reflection with complex FKs using SQLAlchemy, this answer and Tyler Lesmann's article are for you.
UPDATE: Using this method has passed a peer review and is now being used as production code. It seems to work well!
I am getting an IQueryable from my database, and then I am getting another IQueryable from that first one; that is, I am filtering the first one.
My question is: does this affect performance? How many times will the code call the database? Thank you.
Code:
DataContext _dc = new DataContext();
IQueryable offers =
    (from o in _dc.Offers
     select o);
IQueryable filtered =
    (from o in offers
     select new { ... });
return View(filtered);
The code you have given will never call the database, since you're never using the results of the query in any code.
IQueryable collections aren't filled until you iterate through them... and you're not iterating through anything in that code sample (ah, the beauty of lazy initialization).
It also means that when you eventually enumerate filtered, the whole chain is composed into a single query against the database, so building the second IQueryable from the first carries no performance cost over writing one combined query.
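To see when the call actually happens, here is a sketch using the names from the question (the Offer element type and the Price filter are made-up assumptions):
// Build the queries; nothing has hit the database at this point,
// both variables just hold composable expression trees.
IQueryable<Offer> offers = from o in _dc.Offers
                           select o;
IQueryable<Offer> filtered = from o in offers
                             where o.Price > 100   // hypothetical filter
                             select o;
// Only here is the SQL generated and executed, in a single round trip.
List<Offer> results = filtered.ToList();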
SO is not a replacement for developer tools. There are many good free tools that can tell you exactly what this code translates into and how it works. Use Reflector on this method, look at the generated code, and reason for yourself about what is going on from there.