Problems constructing a MasterCard Generate AC command (EMV)

I'm trying to construct a Generate AC command for a Mastercard contactless card. I believe I have correctly read the information the card requires for this from the CDOL1, and sent it, with the correct length, in a Generate AC command; however, the card is replying with a 6700 (wrong length command). Any advice on what the problem is would be very much appreciated.
The CDOL1 provided by the card is:
8C | len:27 Card Risk Management Data Object List 1 (CDOL1)
9F02 | len:06 Amount, Authorised (Numeric)
9F03 | len:06 Amount, Other (Numeric)
9F1A | len:02 Terminal Country Code
95 | len:05 Terminal Verification Results
5F2A | len:02 Transaction Currency Code
9A | len:03 Transaction Date
9C | len:01 Transaction Type
9F37 | len:04 Unpredictable Number
9F35 | len:01 Terminal Type
9F45 | len:02 Data Authentication Code
9F4C | len:08 ICC Dynamic Number
9F34 | len:03 Cardholder Verification Method (CVM) Results
9F21 | len:03 Transaction Time HHMMSS
9F7C | len:14 Customer Exclusive Data (CED)
For which I'm providing the values:
Amount, Authorised (Numeric) 000000000100
Amount Other (Numeric) 000000000000
Terminal Country Code 0826
Terminal Verification Results 0000000000
Transaction Currency Code 0826
Transaction Date 190819
Transaction Type 00
Unpredictable Number 3357A30A
Terminal Type 21
Data Authentication Code 0000
ICC Dynamic Number 0000000000000000
Cardholder Verification Method (CVM) Results 1F0302
Transaction Time 120505
Customer Exclusive Data (CED) 0000000000000000000000000000
which comes to a total of 60 (0x3C) bytes. Adding the Generate AC header and the Le, my message is then:
80AE50003C000000000100000000000000082600000000000826190819003357A30A21000000000000000000001F03021205050000000000000000000000000000
However, the card always returns 6700, which as I understand it indicates a wrong length. I've tried some other values and dropping the Le; however, I don't see how these could affect the length.
A full trace of my run (with some values Xed out) is:
Sending 00A404000E325041592E5359532E4444463031
response length: 62
status: 9000
6F | len:3A File Control Information (FCI) Template
84 | len:14 DF Name: 325041592E5359532E4444463031
A5 | len:28 Proprietary Information
BF0C | len:25 File Control Information (FCI) Issuer Discretionary Data
61 | len:23 Directory Entry
4F | len:7 Application Identifier (AID): A0000000041010
50 | len:10 Application Label: 4D617374657243617264
87 | len:1 Application Priority Indicator: 01
9F0A | len:8 Application Selection Registered Proprietary Data list: 0001050400000000
Sending 00A4040007A0000000041010
response length: 81
status: 9000
6F | len:4D File Control Information (FCI) Template
84 | len:7 DF Name: A0000000041010
A5 | len:42 Proprietary Information
50 | len:10 Application Label: 4D617374657243617264
9F12 | len:10 Application Preferred Name: 4D617374657243617264
87 | len:1 Application Priority Indicator: 01
9F11 | len:1 Issuer Code Table Index: 01
5F2D | len:2 Language Preference: 656E
BF0C | len:1A File Control Information (FCI) Issuer Discretionary Data
9F4D | len:2 Log Entry: 0B0A
9F6E | len:7 Form Factor Indicator (qVSDC): 08260000303000
9F0A | len:8 Application Selection Registered Proprietary Data list: 0001050400000000
Sending 80A8000002830000
response length: 22
status: 9000
77 | len:12 Response Message Template Format 2
82 | len:2 Application Interchange Profile: 1980
94 | len:12 Application File Locator: 080101001001010120010200
Sending 00B2011400
response length: 171
status: 9000
70 | len:81 Record Template
9F42 | len:2 Application Currency Code: 0826
5F25 | len:3 Application Effective Date YYMMDD: XXXXXX
5F24 | len:3 Application Expiration Date YYMMDD: XXXXXX
5A | len:8 Application Primary Account Number (PAN): XXXXXXXXXXXXXXXX
5F34 | len:1 Application Primary Account Number (PAN) Sequence Number: 00
9F07 | len:2 Application Usage Control: FF00
9F08 | len:2 Application Version Number: 0002
8C | len:27 Card Risk Management Data Object List 1 (CDOL1)
9F02 | len:06 Amount, Authorised (Numeric)
9F03 | len:06 Amount, Other (Numeric)
9F1A | len:02 Terminal Country Code
95 | len:05 Terminal Verification Results
5F2A | len:02 Transaction Currency Code
9A | len:03 Transaction Date
9C | len:01 Transaction Type
9F37 | len:04 Unpredictable Number
9F35 | len:01 Terminal Type
9F45 | len:02 Data Authentication Code
9F4C | len:08 ICC Dynamic Number
9F34 | len:03 Cardholder Verification Method (CVM) Results
9F21 | len:03 Transaction Time HHMMSS
9F7C | len:14 Customer Exclusive Data (CED)
8D | len:0C Card Risk Management Data Object List 2 (CDOL2)
91 | len:0A Issuer Authentication Data
8A | len:02 Authorisation Response Code
95 | len:05 Terminal Verification Results
9F37 | len:04 Unpredictable Number
9F4C | len:08 ICC Dynamic Number
8E | len:14 Cardholder Verification Method (CVM) List: 000000000000000042031E031F03
9F0D | len:5 Issuer Action Code - Default: B450840000
9F0E | len:5 Issuer Action Code - Denial: 0000000000
9F0F | len:5 Issuer Action Code - Online: B470848000
5F28 | len:2 Issuer Country Code: 0826
9F4A | len:1 Static Data Authentication Tag List: 82
57 | len:19 Track 2 Equivalent Data: ...
Sending 00B2010C00
response length: 112
status: 9000
70 | len:6C Record Template
9F6C | len:2 Card Transaction Qualifiers (CTQ): 0001
9F62 | len:6 PCVC3 Track1 location: 000000380000
9F63 | len:6 PUNATC Track1 location: 00000000E0E0
56 | len:43 Track 1 Equivalent Data: ...
9F64 | len:1 NATC Track1 location: 03
9F65 | len:2 PCVC3 Track2 location: 000E
9F66 | len:2 PUNATC Track2 location: 0E70
9F6B | len:19 Track1 data: ...
9F67 | len:1 NATC Track2 location: 03
Sending 00B2012400
response length: 189
status: 9000
70 | len:81 Record Template
9F47 | len:1 ICC Public Key Expo: 03
9F46 | len:176 ICC Public Key Cert: 8A4...C3
Sending 00B2022400
response length: 229
status: 9000
70 | len:81 Record Template
8F | len:1 Certification Authority Public Key Index: 05
9F32 | len:1 Issuer Public Key Exponent: 03
92 | len:36 Issuer Public Key Remainder: EEAAE75B30426DEB86F113DFD1B53E7D98D6456172ECFA87F83A3E7733341572B1AC1CE9
90 | len:176 Issuer Public Key Certificate: 3E9C8727E2...2FAF87606
Sending 80AE50003C000000000100000000000000082600000000000826190819003357A30A21000000000000000000001F03021205050000000000000000000000000000
response length: 2
Error: 6700
I would have expected the reply to the final command to be the AC, or an error with one of the inputs, not a 6700 length error message.

You read the 9F7C length wrong: it's 0x14 (20 decimal), not 14. The total length for this card application should be 66 (0x42).
What you might also want to know is that the ICC public key certificate contains the PAN as well, so you haven't Xed out enough (516273******2854). Since you are trying to make a CDA transaction, you should learn more about that during public key retrieval.
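To spell the arithmetic out: every length in the CDOL1 dump is hexadecimal, so the data field should be 06+06+02+05+02+03+01+04+01+02+08+03+03+14 (hex) = 0x42 = 66 bytes, of which the CED is 20 bytes rather than 14. Keeping your field values and zero-padding the CED to 20 bytes, a corrected command (Lc = 42, trailing 00 as Le) would look like:
80AE500042000000000100000000000000082600000000000826190819003357A30A21000000000000000000001F0302120505000000000000000000000000000000000000000000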

Related

Web Intelligence: line chart method

I'm trying to make a line chart in WebI 4.2, Support Pack 4, Compilation 14.2.4.2410.
I have an array with my number of orders for each month.
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| 165 | 221 | 150 | 214 | 105 | 18  | 115 | 15  | 201 | 26  | 102 | 101 |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
For example, I use this formula in my measures, changing the month at the end, to get my total of orders:
=Number([Id]) In ([Date]) Where (Month(ToDate([Date]; "dd/MM/yy")) ="january")
How can I get a line chart with the months on the x-axis and my number of orders on the y-axis? I think my method must be completely wrong, because I can't work out how to use a line chart correctly.
Should I have my months as dimension variables?
I am not yet allowed to post pictures to illustrate what I want, and sorry for my English level.
I believe you are making this more difficult than it needs to be. It seems like you do not have the month name in your data, so create a variable to get that. If the object containing your date is a Date data type, it would look like this...
Month=Month([Date])
If your date is a string you would do something like this...
Month=Month(ToDate([Date]; "dd/MM/yy"))
If you don't have a measure for your number of orders you will need a variable counting them...
Number of Orders=Count([Id])
Then create a table with Month and Number of Orders. Create a custom sort order if you want the months in chronological order. You can easily create a line chart by right-clicking on your table and choosing "Turn Into > Line Chart".
I did this with a Calendar table I have with a count of the number of days in each month in 2019.

Fix the order of auto_incremented keys in a MySQL database

I have a MySQL table with a primary key, book_num, which is auto_incremented. The problem is that I deleted some books from it and then inserted more through an interface, so if the book with book_num = 4 was deleted, the table is now missing book_num 4 and the keys run 1, 2, 3, 5. How can I make that missing spot get covered, so the four rows have book_num 1, 2, 3, 4 rather than 1, 2, 3, 5? I know I can duplicate the table and insert all the values from the original into the new one so there is no skipping, but I was wondering if there is a MySQL command to do that. Here is the actual table, which is missing the number 79 because I deleted that book.
+----------+----------------------------------------------------------+
| book_num | book_name                                                |
+----------+----------------------------------------------------------+
|       74 | MySQL Cookbook                                           |
|       75 | PHP Cookbook                                             |
|       76 | Learn to Program with Python                             |
|       77 | THE MIT ENCYCLOPEDIA OF THE COGNITIVE SCIENCES           |
|       78 | Microsoft Visual C++/CLI Step by Step                    |
|       80 | Advanced Problems in Core Mathematics                    |
|       81 | The C++ Programming Language                             |
|       82 | Design Patterns : Elements of Reusable Object-Oriented S |
|       83 | Developing Concepts in Applied Intelligence              |
+----------+----------------------------------------------------------+
You can see that 79 is missing. Is there a way to push all the indexes after 78 up, filling the gap at 79, so that after 78 it shows:
|       79 | Advanced Problems in Core Mathematics                    |
and so forth.
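For what it's worth, there is no single MySQL command that renumbers existing keys; a sketch of the duplicate-table approach described above (table name books assumed purely for illustration, and only safe if book_num is not referenced by other tables) might look like:
-- Hypothetical sketch: rebuild the table so book_num is reassigned
-- contiguously by the AUTO_INCREMENT mechanism.
CREATE TABLE books_new LIKE books;                 -- copy the structure
INSERT INTO books_new (book_name)                  -- omit book_num so it is
SELECT book_name FROM books ORDER BY book_num;     -- regenerated without gaps
RENAME TABLE books TO books_old, books_new TO books;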

How do I store arrays of integers in a Vertica database?

Currently I'm working with a Vertica database.
I'm facing a problem storing arrays of integers in one column. I imported the data values through a CSV.
I used the following code to create the table:
CREATE TABLE testing_table_1 (email varchar,name varchar,gen int,yob int,ct int,cy int,a varchar(65000),b varchar(65000),c varchar(65000));
I imported the data with the following command:
COPY testing_table_1 from '/home/naresh/Desktop/bigfile.csv' parser fcsvparser();
My sample CSV format looks like this:
ghchnu#tly.org.cn | Donald Garcia | 2 | 2003 | 21947 | 91 | 241,127,225,68,162 | 4,84,63,69,15 | 32,44,15,31
rlmx#jyqewy.biz | Charles Brown | 2 | 2012 | 22218 | 45 | 127,156,186,136,242 | 49,69,14,80,95,1 | 39,36,38,40,20
The 7th, 8th, and 9th columns are being stored as strings, but I want them to be stored as arrays of integers; because of the string format I am unable to perform combinations-of-integers operations using an 'IN' query.
I don't want to use Vertica's flex-table format, so please give suggestions other than flex tables.
Please give me a possible solution for the above problem, and correct me if I am making any mistake.
Thanks in advance.
Vertica is a real, SQL-based, relational database, and those actually do not have rows with a variable number of columns, which is what using arrays would boil down to.
What you need to do is to make, out of this:
[...] 91 | 241,127,225,68,162 | 4,84,63,69,15 | 32,44,15,31
This:
[...] 91 | 1 | 241 | 4 | 32
[...] 91 | 2 | 127 | 84 | 44
[...] 91 | 3 | 225 | 63 | 15
[...] 91 | 4 | 68 | 69 | 31
[...] 91 | 5 | 162 | 15 |(null)
Check out the SPLIT_PART() function; you might want to run it with 1,2,3,4 and finally 5 as the third parameter, and use that very number as the key that you see in the second displayed column above. To get the resulting column as integers, cast the strings that come out of the SPLIT_PART() function to integers.
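For example, a sketch of that for one of the three list columns (column a, element indices 1 through 5, with the cast to integers) could look like this:
-- Hypothetical sketch: explode the comma-separated column "a" of
-- testing_table_1 into one row per element, cast to INT.
-- SPLIT_PART() returns '' once the index runs past the last element.
SELECT t.email,
       seq.idx,
       SPLIT_PART(t.a, ',', seq.idx)::INT AS a_value
FROM testing_table_1 t
CROSS JOIN (SELECT 1 AS idx UNION ALL SELECT 2 UNION ALL SELECT 3
            UNION ALL SELECT 4 UNION ALL SELECT 5) seq
WHERE SPLIT_PART(t.a, ',', seq.idx) <> '';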
Good luck -
Marco

Steadily increasing execution time for each loop of a ForEach control

First, some background, I’m an SSIS newbie and I’ve just completed my second data-import project.
The package is very simple and consists of a dataflow that imports a tab-separated customer values file of ~30,000 records into an ADO recordset variable which in turn is used to power a ForEach Loop Container that executes a piece of SQL passing in values from each row of the recordset.
The import of the first ~21,000 records took 59 hours to accomplish, prior to it failing! The last ~9,000 took a further 8 hours. Yes, 67 hours in total!
The SQL consists of a check to determine if the record already exists, a call to a procedure to generate a new password, and a final call to another procedure to insert the customer data into our system. The final procedure returns a recordset, but I'm not interested in the result and so I have just ignored it. I don't know whether SSIS discards the recordset or not. I am aware that this is the slowest possible way of getting the data into the system, but I did not expect it to be this slow, nor to fail two-thirds of the way through, and again whilst processing the last ~9,000.
When I tested a ~3,000-record subset on my local machine, the Execute Package Utility reported that each insert was taking approximately 1 second. A bit of quick math suggested that the total import would take around 8 hours to run. That seemed like a long time, which I had expected given all that I had read about SSIS and RBAR execution. I figured that the final import would be a bit quicker, as the server is considerably more powerful. I am accessing the server remotely, but I wouldn't have expected this to be an issue, as I have performed imports in the past using bespoke C# console applications with simple ADO connections, and have had nothing run anywhere near as slowly.
Initially the destination table wasn’t optimised for the existence check, and I thought this could be the cause of the slow performance. I added an appropriate index to the table to change the test from a scan to a seek, expecting that this would get rid of the performance issue. Bizarrely it seemed to have no visible effect!
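(For reference, a covering index for that existence check, assuming the table and column names from the batch quoted further down, might look something like the sketch below; the index actually added may well have differed.)
-- Hypothetical sketch: index covering the existence check.
CREATE NONCLUSTERED INDEX IX_tblSubscriber_ExistenceCheck
    ON [dbo].[tblSubscriber] (strSubscriberEmail, ProductId, strTrialSource);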
The reason we use the sproc to insert the data into our system is for consistency. It represents the same route that the data takes if it is inserted into our system via our web front-end. The insertion of the data also causes a number of triggers to fire and update various other entities in the database.
What's been occurring during this import, though, and has me scratching my head, is that the execution time for the SQL batch, as reported by the output of the Execute Package Utility, has been steadily increasing during the run. What starts out as a sub-one-second execution time ends up, over the course of the import, at greater than 20 seconds, and eventually the import package simply ground to a complete halt.
I've searched all over the web multiple times, thanks Google, as well as StackOverflow, and haven’t found anything that describes these symptoms.
Hopefully someone out there has some clues.
Thanks
In response to ErikE: (I couldn’t fit this into a comment, so I've added it here.)
Erik, as per your request I ran the profiler over the database whilst running the three-thousand-item test file through its paces.
I wasn't easily able to figure out how to get SSIS to insert a marker into the SQL that would be visible to the profiler, so I just ran the profiler for the whole run. I know there will be some overhead associated with this but, theoretically, it should be more or less consistent over the run.
The duration on a per-item basis remains pretty constant over the whole run.
Below is cropped output from the trace. In the run that I've done here, the first 800 rows overlapped previously entered data, so the system was effectively doing no work (Yay indexes!). As soon as the index stopped being useful and the system was actually inserting new data, you can see the times jump accordingly, but they don't seem to change much, if at all, between the first and last elements, with the number of reads being the largest item.
------------------------------------------
| Item | CPU | Reads | Writes | Duration |
------------------------------------------
| 0001 |   0 |    29 |      0 |        0 |
| 0002 |   0 |    32 |      0 |        0 |
| 0003 |   0 |    27 |      0 |        0 |
|… |
| 0799 |   0 |    32 |      0 |        0 |
| 0800 |  78 |  4073 |     40 |      124 |
| 0801 |  32 |  2122 |      4 |       54 |
| 0802 |  46 |  2128 |      8 |      174 |
| 0803 |  46 |  2128 |      8 |      174 |
| 0804 |  47 |  2131 |     15 |      242 |
|… |
| 1400 |  16 |  2156 |      1 |       54 |
| 1401 |  16 |  2167 |      3 |       72 |
| 1402 |  16 |  2153 |      4 |       84 |
|… |
| 2997 |  31 |  2193 |      2 |       72 |
| 2998 |  31 |  2195 |      2 |       48 |
| 2999 |  31 |  2184 |      2 |       35 |
| 3000 |  31 |  2180 |      2 |       53 |
------------------------------------------
Overnight I've also put the system through a full re-run of the import with the profiler switched on to see how things fared. It managed to get through one third of the import in 15.5 hours on my local machine. I exported the trace data to a SQL table so that I could get some statistics from it. Looking at the data in the trace, the delta between inserts increases by ~1 second per thousand records processed, so by the time it's reached record 10,000 it's taking 10 seconds per record to perform the insert. The actual code being executed for each record is below. Don't bother critiquing the procedure; the SQL was written by a self-taught developer who was originally our receptionist, long before anyone with actual developer education was employed by the company. We are well aware that it's not good. The main thing is that I believe it should execute at a constant rate, and it very obviously doesn't.
if not exists
(
    select 1
    from [dbo].[tblSubscriber]
    where strSubscriberEmail = @EmailAddress
      and ProductId = @ProductId
      and strTrialSource = @Source
)
begin
    declare @ThePassword varchar(20)
    select @ThePassword = [dbo].[DefaultPassword]()
    exec [dbo].[MemberLookupTransitionCDS5]
        @ProductId
        ,@EmailAddress
        ,@ThePassword
        ,NULL        --IP Address
        ,NULL        --BrowserName
        ,NULL        --BrowserVersion
        ,2           --blnUpdate
        ,@FirstName  --strFirstName
        ,@Surname    --strLastName
        ,@Source     --strTrialSource
        ,@Comments   --strTrialComments
        ,@Phone      --strSubscriberPhone
        ,@TrialType  --intTrialType
        ,NULL        --Redundant MonitorGroupID
        ,NULL        --strTrialFirstPage
        ,NULL        --strTrialRefererUrl
        ,30          --intTrialSubscriptionDaysLength
        ,0           --SourceCategoryId
end
GO
Results of determining the difference in time between each execution (cropped for brevity).
----------------------
| Row   | Delta (ms) |
----------------------
|   500 |        510 |
|  1000 |        976 |
|  1500 |       1436 |
|  2000 |       1916 |
|  2500 |       2336 |
|  3000 |       2816 |
|  3500 |       3263 |
|  4000 |       3726 |
|  4500 |       4163 |
|  5000 |       4633 |
|  5500 |       5223 |
|  6000 |       5563 |
|  6500 |       6053 |
|  7000 |       6510 |
|  7500 |       6926 |
|  8000 |       7393 |
|  8500 |       7846 |
|  9000 |       8503 |
|  9500 |       8820 |
| 10000 |       9296 |
| 10500 |       9750 |
----------------------
Let's take some steps:
Advice: Isolate if it is a server issue or a client one. Run a trace and see how long the first insert takes compared to the 3000th. Include in the SQL statements some difference on the 1st and 3000th iteration that can be filtered for in the trace so it is not capturing the other events. Try to avoid statement completion--use batch or RPC completion.
Response: The recorded CPU, reads, and duration from your profiler trace are not increasing, but the actual elapsed/effective insert time is.
Advice: Assuming that the above pattern holds true through the 10,000th insert (please advise if different), my best guess is that some blocking is occurring, maybe something like a constraint validation that is doing a nested loop join, which would grow with the number of rows in the table just as you are seeing. Would you please do the following:
Provide the full execution plan of the INSERT statement using SET SHOWPLAN_TEXT ON.
Run a trace on the Blocked Process Report event and report on anything interesting.
Read Eliminating Deadlocks Caused by Foreign Keys with Large Transactions and let me know if this might be the cause or if I am barking up the wrong tree.
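For the first point above, a minimal sketch (paste one iteration of the insert batch between the two settings; while SHOWPLAN_TEXT is ON the statements are not executed, only their estimated plans are returned):
SET SHOWPLAN_TEXT ON;
GO
-- one iteration of the insert batch from the question goes here
GO
SET SHOWPLAN_TEXT OFF;
GO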
If none of this makes progress on the problem, simply update your question with any new information and comment here, and I'll continue to do my best to help.

Move pointer in MS Access

I am trying to figure out how I can get MS Access to use a field value that is 3 rows lower.
The data is from an external source which retrieves SNMP data every week. I linked a table in Access to the txt output file.
Here is a sample:
| Device | IP Address | Uptime    | SNMP Custom |
-------------------------------------------------
| Router | 192.168..  | 1 day, 1h | IOS version |
Now that I want to get more information from the devices, Cisco decided it was necessary to add new lines to the output file, so the linked table now looks like:
| Device | IP Address | Uptime | SNMP Custom | SNMP Custom 2
-----------------------------------------------------------------
| Router | 192.168.. | 1 day, 1h | IOS version |
| Technical Support: sometext
| Copyright (c) sometext
| Compiled | ABCD
Now those 4 lines are from 1 device, and the ABCD should be in the SNMP Custom 2 field. The excess rows I can simply delete, but I have no idea how to move the ABCD value to the SNMP Custom 2 field.
Can this be done using MS Access (VB?) or classic ASP? Any thoughts are greatly appreciated.
Thanks in advance
If I've understood your OP correctly, try the following in an Access query:
UPDATE myTable SET [SNMP Custom 2] = [IP Address], [IP Address] = "" WHERE [Device] = "Compiled"