JIRA Importing/Updating subtasks - csv

I'm somewhat new to JIRA (skill level: novice)
Jira v 6.4.8
JIM v 7.0.12
I am attempting to import issues using the Issue->Import from CSV (bulk create tool)
I have defined ticket CM-1 as a parent ticket. A generic CSV looks like this:
Summary, Parent ID, Issue ID
CM-2, CM-1,
CM-3, CM-1,
CM-4, CM-1,
The first import works successfully and the issues map as children of CM-1.
We then attempt to re-import to update the ~100 fields that changed overnight (not shown in this example for clarity):
Summary, Parent ID, Issue ID
CM-2, CM-1, CM-2
CM-3, CM-1, CM-3
CM-4, CM-1, CM-4
We encounter an issue where new subtasks are created, and nothing is updated.
I have also tried to map the Issue ID found when I inspect the subtask ticket's XML. It looks something like this:
<item>
<title>[CM-2] CM2</title>
<link>
https://website.net/browse/CM-2
</link>
<project id="11902" key="CM">Change Management</project>
<description>CM-2 Description</description>
<environment/>
<key id="191147">CM-2</key>
<summary>CM-2</summary>
Specifically the ""
So that would look like
Summary, Parent ID, Issue ID
CM-2, CM-1, 191147
CM-3, CM-1, 191148
CM-4, CM-1, 191149
Once again we see new issues created and no updates performed. I've read the documentation, searched Atlassian Answers, asked multiple questions, and searched everywhere, but I'm not seeing any solutions. We literally need to update thousands of tickets at least once a day, and we don't have the manpower to perform this task any other way.
Criteria:
This needs to be performable by an end user or a team lead; they will have access to the bulk import tool (Bulk Create) from the Issues->Import Issue from CSV link, but will not have access to the administrator-level external project imports.
I know this isn't an ideal long-term solution, and I would like to investigate a method to further automate this, but we need a short-term solution (this).
I appreciate any and all responses. We are importing from a very outdated instance of Remedy that's going to remain in use for the next ~3+ years.
Thanks,
Jacob

First of all, if you want to update issues via CSV, you must include an 'Issue Key' column and, during import, map it to the issue key field (CM-1, CM-2, etc. are issue keys in your example). Otherwise every import will generate new issues in JIRA.
The 'Issue ID' and 'Parent ID' columns refer to internal IDs (not issue keys). For adding/updating sub-tasks, you need to figure out the ID of the parent (see below), write that id in the 'Parent ID' column of the CSV, and leave the 'Issue ID' value empty. This is explained in the 'Creating sub-tasks' section here.
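For example, an update CSV for your sub-tasks might look like this, where 191146 stands in for CM-1's internal id (a hypothetical value; see below for how to find the real one):
Summary, Issue Key, Parent ID
CM-2, CM-2, 191146
CM-3, CM-3, 191146
CM-4, CM-4, 191146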
Figuring out the id of an existing JIRA issue is somewhat tricky (unless you imported them from the beginning with your own meaningful internal IDs). An easy way from the GUI is to right-click the Edit button and choose 'Open in new tab'. The URL of the edit page will then include the id (e.g. http://jira-srv/secure/EditIssue!default.jspa?id=91796).
If you need to automate it, you will have to resort to directly querying the database (unless someone else can offer you a better way... as far as I know the REST API does not expose it). See the discussion here if you want details.
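If you go the database route, a query along these lines should produce a key-to-id mapping on a stock JIRA schema (the jiraissue and project table and column names are assumptions based on JIRA's default layout, so verify them against your instance; string concatenation syntax also varies by database):
SELECT CONCAT(p.pkey, '-', ji.issuenum) AS issue_key, ji.id AS issue_id
FROM jiraissue ji
JOIN project p ON p.id = ji.project
WHERE p.pkey = 'CM';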


SSIS - Loop Through Active Directory

Disclaimer: new to SSIS and Active Directory
I have a need to extract all users within a particular Active Directory (AD) domain and import them into Excel. I have followed this: https://www.itnota.com/query-ldap-in-visual-studio-ssis/ in order to create my SSIS package. My query is:
LDAP://DC=JOHN,DC=JANE,DC=DOE;(&(objectCategory=person)(objectClass=user)(name=a*));Name,sAMAccountName
As you know, there is a 1,000-row limit when pulling from AD. In my query I currently have (name=a*) to test the process, and it works. I need to know how to set up a loop with variables to pull all records and import them into Excel (or whatever you experts recommend). Also, how do I know what other field names are available to pull?
Thanks in advance.
How do I see what's in Active Directory
Tool recommendations are off topic for the site, but a tool that you can download, no install required, is AD Explorer. It's a Microsoft tool that allows you to view your domain. I highly recommend that people who need to see what's in AD use something like this, as it shows you your basic structure.
What's my domain controller?
Start -> Command Prompt
Type set | find /i "userdnsdomain" and look for USERDNSDOMAIN. Put that value in the connect dialog; I save it because I don't want to enter this every time.
Search/Find and then look yourself up. Here I'm going to find my account by using my sAMAccountName
The search results show only one user but there could have been multiples since I did a contains relationship.
Double-clicking the value in the bottom results section causes the lower pane to update with the details of the search result.
This is nice because while the right side shows all the properties associated with my account, it also updates the left pane to navigate to the CN. In my case it's CN=Users, but again, it could be something else in your specific environment.
You might discover an interesting categorization for your particular domain. At a very large client, I discovered that my target users were all under a CN (Common Name), so I could use that in my AD query.
There are things you'll see here that you sure would like to bring into a data flow but won't be able to, like memberOf: it's a complex type and there's no equivalent in the data flow data types for it. I think Integer8 is also something that didn't work.
Loop the loop
The "trick" here is that we'll need to take advantage of the
The name of the AD provider has changed since I last looked at this. In VS 2017, I see the OLE DB Provider name as "OLE DB Provider for Microsoft Directory Service"
Put in your query and you should get results back. Let that happen so the metadata is set.
An ADO.NET source does not support parameterization the way the OLE DB source does. However, you can apply an Expression on the Data Flow, which surfaces the component's properties, and that's what we'll do.
Click out of the Data Flow and back into the Control Flow, then right-click on the Data Flow and select Properties. In the properties window, find Expressions and click the ellipses (...). Up pops the Property Expressions Editor.
Find the ADO.NET source under Property and in the Expressions section, click the Ellipses.
Here, we'll use your same source query just to prove we're doing the right things
"LDAP://DC=JOHN,DC=JANE,DC=DOE;(&(objectCategory=person)(objectClass=user)(name=" + "a" + "*));Name,sAMAccountName"
We're doing string building here so the problem we're left to solve is how we can substitute something for the "a" in the above query.
The laziest route would be to:
Create an SSIS variable of type String called CurrentLetter and initialize it to a
Update the expression we just created to be "LDAP://DC=JOHN,DC=JANE,DC=DOE;(&(objectCategory=person)(objectClass=user)(name=" + @[User::CurrentLetter] + "*));Name,sAMAccountName"
Add a Foreach Loop Container (FELC) to your Control Flow.
Configure the FELC with an enumerator of "Foreach Item Enumerator"
Click the Columns...
Click Add (this results in Column 0 with data type String) so click OK
Fill the collection with each letter of the alphabet (a snippet for generating these follows this list)
In the Variable Mappings tab, assign Variable User::CurrentLetter to Index 0
Click OK
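Typing the 26 collection rows by hand is tedious; a throwaway snippet like this (Python, purely illustrative) prints the values to paste in. If some of your account names start with digits or other characters, add those values as well:
import string

# One value per line, matching one row per item in the Foreach Item Enumerator
for letter in string.ascii_lowercase:
    print(letter)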
Old blog posts on the matter because I like clicks
https://billfellows.blogspot.com/2011/04/active-directory-ssis-data-source.html
http://billfellows.blogspot.com/2013/11/biml-active-directory-ssis-data-source.html

SSIS Errors for simple CSV Data Flow

Sorry to darken your day with my troubles, but SSIS has broken me! I am new to SSIS and I just seem to be misunderstanding it.
For background: I have a few versions of a basic package that includes a Foreach Loop container and a Data Flow with a few Derived Columns that imports CSV files into a SQL Server Staging table. It is very straightforward and does include an Execute SQL task and a File Move but those work fine. The issues are with the Foreach loop and the Data Flow.
I have one version of this package (let’s call it “A”) that seemed to be working fine. It would process multiple files in a folder, insert records into the staging table, properly execute the SQL Statements, and move the files to Archive. Everything seemed fine until I carefully QA’d the process. Turns out it was duplicating the data from one file, and never importing the data from a second Source File! Yet, the second/dupe round of data included the Source Filename (via a derived column) of the second file (but the data from the first). So it looked like I had successfully processed BOTH files until I looked at the actual data and saw that none of the values from the second source file were ever written to the Staging table.
Once I discovered this, I figured that the problem was in the Foreach loop and how I set up the different file path & name variables. So, I decided to try to make a new version of the package. I started by copying package A and created package B. In B, I deleted the Source Connection Manager and created a new Connection Manager along with all new file & path variables. I then tried to clean up/fix/replace various elements in my Data Flow and Foreach loop. In the process, I discovered that the Advanced Mappings from A – which DID work – were virtually all set up as String (even the Currency and Date columns). That did not seem right, so I modified each source money column by changing to data type Currency, and changed each date-related column to data type Date.
What followed has been dozens and dozens of errors, and I cannot get Package B to run. I have even changed all of the B data types back to String (mirroring the setup in Package A, which DID work). But still no joy.
This leads me to ask a few questions to those of you smarter than I:
1) Why can’t SSIS interpret Source CSV data using the proper data type? I.e. why do I need to set every Input column as a STRING when some columns are clearly & completely Numeric, Currency or Dates? (Yes, the Source CSV files are VERY clean – most don’t even have NULLS)
a. When I do change the Advanced mapping for a date-related Source column to Date, I get the ever present error message: [Flat File Source [30]] Error: Data conversion failed. The data conversion for column "Settle Date" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
2) When I reset the data types back to String in package B, I still get errors – usually Truncation errors (and Yes – I have adjusted the length to 250 in one of these columns).
a. Error Message: "The value could not be converted because of a potential loss of data.".
b. When I reset the Mappings to ignore the column (as a test), it throws a similar error at the next column.
3) Any ideas why Package A would dupe a file’s data and not process the second file, yet throw no errors and move both to Archive?
4) Why does the Data Viewer appear to have parsing errors (it shows data in the wrong columns) but when you use the Copy data feature in the data viewer and paste it into Excel, all of the data lines up perfectly?
5) Are there any tips & tricks that a rookie SSIS user needs to understand and which might not be apparent through the documentation and searching web articles as well as this site?
I can provide further details if they will help, but these packages are really very simple and should not be causing me this much frustration.
THANKS for any insights.
DGP
Wow, seems like you have a lot of SSIS issues... I think the reason for the same file being extracted is the way your 'variable mappings' are defined.
Have you had a look and followed this guide:
https://www.simple-talk.com/sql/ssis/ssis-basics-introducing-the-foreach-loop-container/
Hope this helps.
Shaheen
Thanks Tab & Shaheen,
To all SSIS rookies - please learn from my mistakes!
It appears that my issue was actually in how I identified the TEXT QUALIFIER in the Connection Manager. I had entered "" and that was causing problems with how my columns were being parsed. The parsing issues caused unexpected values to appear in some of the columns, and that was causing the errors in the package.
When I changed the Text Qualifier to only ONE double quote - " - the whole thing worked!
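To see why "" breaks things: a text qualifier is a single character that wraps a field, so a two-character value can't match anything. A quick illustration outside SSIS (Python here, purely to demonstrate the concept; SSIS's flat file parser treats the qualifier similarly):
import csv, io

sample = '"Smith, John",100.00,2015-07-21\n'
# The qualifier is ONE character; csv.reader even rejects multi-character quotechars.
for row in csv.reader(io.StringIO(sample), quotechar='"'):
    print(row)  # ['Smith, John', '100.00', '2015-07-21']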
As I mentioned - and as Shaheen suspected - my initial issues with the duplicate processing were probably due to how I set up the foreach loop. I had already fixed that, but was still getting errors until I fixed the Text Qualifier.
I have only tested it a few times but it looks like that was the issue.
Thanks for the contributions.
DGP

xpath finds element in developers console but not in scrapy.response

I'm trying to scrape the price of the first ticket on the page here, using this xpath:
'.//*[@class="price"]/text()'
This works in the developer's console, but not when I run it in the scrapy shell using response.xpath. I have also tried the following in the shell:
'.//*[@class="initial"]/div[@class="price"]/text()'
and
'//*[@id="tVB901769989"]/div[1]/div[4]' (although I don't think that the id property can be used in the shell like this).
Is there something wrong with the xpaths that I've used, or is there something different about the way the page works? Any help would be appreciated. Thanks!
This happens because you are looking at two different requests: the page you see doesn't contain the information you need; it gets it dynamically, in this case from: www.vividseats.com/javascript/tickets.shtml?productionId=1771684
There you can see the prices in JSON format. I think this is one item:
{
"s":"Section 111",
"r":"8-22",
"q":"4",
"p":"692.00",
"i":"VB782041491",
"d":"111",
"n":"Zone Seating. The seller is committing to procure these tickets for you upon receipt of your order. After you place your order and your order is confirmed, we guarantee that your tickets will be within the listed zone
or section listed or one comparable and that you will receive these tickets in time for the event or
your money back. Orders exceeding four tickets may be split up into different rows within the requested
zone or section.",
"f":"0",
"l":"Section 111",
"g":"0",
"e":"0",
"h":"07/21/15",
"t":"0",
"v":"",
"c":"84352",
"z":"1",
"rhdn":"0",
"ind":"0",
"sd":"0"
}
where p contains the price.
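So instead of parsing the HTML, request that endpoint directly and read the JSON. A minimal sketch (the key names i and p come from the payload above; the exact shape of the response wrapper is an assumption, so inspect it first):
import json
import requests  # inside a spider you'd yield scrapy.Request(url, callback=...) instead

url = 'http://www.vividseats.com/javascript/tickets.shtml?productionId=1771684'
data = json.loads(requests.get(url).text)

# Assuming the payload is (or contains) a list of ticket dicts like the one above
tickets = data if isinstance(data, list) else data.get('tickets', [])
for ticket in tickets:
    print(ticket.get('i'), ticket.get('p'))  # listing id and price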

Solr dataimport handler cacheImpl does not retrieve rows

I am currently working on a dataimport handler that retrieves data from MySQL for quick searching. It consists of the import of a root entity CabinCategoryFares and a few child entities (Cruise, RouteDay, Ship).
This import works, but it is very slow: the relation between e.g. CabinCategoryFares and Cruise is many-to-one, so many identical queries are fired against Cruise.
To alleviate this, I am trying to implement SortedMapBackedCache caching on the child entities. Below is a snippet; the original is quite big.
<document name="Cruises">
  <entity name="CabinCategoryFare" transformer="RegexTransformer" query="SELECT CabinCategoryFare.cruise_id FROM CabinCategoryFare">
    <entity name="Cruise" cacheImpl="SortedMapBackedCache" cacheKey="Cruise.id" cacheLookup="CabinCategoryFare.cruise_id" query="SELECT Cruise.id FROM Cruise">
    </entity>
  </entity>
This returns NULL for every field that is read from Cruise. I can tell from the logs that the dataimporthandler is running the Cruise query, but it just isn't returning any results or any errors after that. It seems it isn't able to find any hits on the cacheLookup, but logging in the DIHCacheSupport class is non-existent and I'm at a total loss as to what's happening, or rather why it isn't happening.
Any thoughts?
Found the problems:
1. Bug in Solr/DIHCacheSupport.java: https://stackoverflow.com/a/21732907/3012497
(cacheKey gets uppercased somewhere in the process, cacheLookup does not so one needs to always use an uppercase cacheLookup)
2. The query for the Cruise entity uses a grouping function (GROUP_CONCAT) but didn't have a GROUP BY clause. This wasn't a problem uncached (because the per-row WHERE clause restricted it), but without a WHERE clause the query returns only a single row.
3. DIHCacheSupport seems to only work with string keys; an int key causes an exception that does not show up in the logs.
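For reference, a rough sketch of the child entity with those three fixes applied (untested; CAST syntax is MySQL, and the uppercased cacheLookup follows the linked answer):
<entity name="Cruise" cacheImpl="SortedMapBackedCache"
        cacheKey="id"
        cacheLookup="CABINCATEGORYFARE.CRUISE_ID"
        query="SELECT CAST(Cruise.id AS CHAR) AS id FROM Cruise GROUP BY Cruise.id">
</entity>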
Hope this might save someone a few hours.

Can I insert deserialized JSON SObjects from another Salesforce org into my org?

We have the need to clone a complex data structure from one org to another. This contains a series of custom SObjects, including parents and children.
The flow would be the following: on the origin org, we just JSON.serialize the list of SObjects we want to send. Then, on the target org, we can JSON.deserialize that list of objects. So far so good.
The problem is that we cannot insert those SObjects directly, since they contain the origin org's IDs and Salesforce won't let us insert objects that already have Ids.
The solution we found is to manually insert the object hierarchy, maintaining a map of originId > targetId and fixing the relationships manually. However, we wonder if Salesforce provides an easier way to do such a thing, or someone knows a better way to do it.
Is there an embedded way in Salesforce to do this? Or are we stuck into a tedious manual process?
A List.deepClone() call with preserveId = false might deal with one problem. Then:
Consider using upsert operation to build the relationships for you.
Upsert can not only prevent duplicates but also maintain hierarchies.
You'll need an external Id field on the parent, not on the children though.
/* Prerequisites to run this example successfully:
- having a field Account_Number__c that will be marked as ext. id (you can't mark the standard one, sadly)
- having an account in the DB with such a value (but the point of the example is to NOT query for its Id)
*/
Account parent = new Account(Account_Number__c = 'A364325');
Contact c = new Contact(LastName = 'Test', Account = parent);
upsert c;
System.debug(c);
System.debug([SELECT AccountId, Account.Account_Number__c FROM Contact WHERE Id = :c.Id]);
If you're not sure whether it will work for you - play with Data Loader's upsert function; it might help you understand.
If you have more than a 2-level hierarchy on the same sObject type, I think you'd still have to upsert them in the correct order (or use the Database.upsert version and keep re-running it for the failed ones).
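A rough sketch of that last idea (hypothetical data; it assumes the same Account_Number__c external id field as above, and passing allOrNone = false makes Database.upsert report per-record results instead of throwing):
// Partial-success upsert: failed rows can be collected and re-run on a later pass.
List<Account> accs = new List<Account>{
    new Account(Name = 'Acme', Account_Number__c = 'A364325')
};
Database.UpsertResult[] results = Database.upsert(accs, Account.Account_Number__c, false);
for (Integer i = 0; i < results.size(); i++) {
    if (!results[i].isSuccess()) {
        System.debug('Row ' + i + ' failed: ' + results[i].getErrors()[0].getMessage());
    }
}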