I have looked everywhere for an example of a QSealC signed message, and I could not find any info.
I need to verify the signature of a QSealC-signed payload, and I need to sign the responding payload, but I understand that the payload is all in JSON.
Are there examples out there of QSealC signed payloads?
thanks
You will do both the signing and validation as detailed by IETF's draft-cavage-http-signatures, where you should pay special attention to section 4.1.1 for constructing and section 2.5 for verifying the Signature header.
This draft is referenced by both Berlin Group's XS2A NextGenPSD2 Framework Implementation Guidelines and Stet (France). However, note that it's normal that each unique implementation imposes additional requirements on the HTTP Message Signing standard, e.g. by requiring specific headers to be signed or using a specially formatted keyId. I am not sure whether other standardizations such as Open Banking (UK) reference it.
Take note that you do not need actual QSealC PSD2 certificates to begin your implementation of either the signing or the validation process, as you can create your own self-issued certificates, e.g. using OpenSSL, by adding the OIDs found in the ASN.1 profile described in ETSI TS 119 495.
However, I strongly recommend you find a QTSP in your region and order certificates both for development and testing, and for use in production when the time comes.
I won't go into detail on the actual process of creating the signature itself, as it's very well detailed in draft-cavage-http-signatures, but consider the following example:
You're requesting GET https://api.bank.eu/v1/accounts, and after processing your outgoing request you end up with the following signing string:
date: Sun, 12 May 2019 17:03:04 GMT
x-request-id: 69df69c1-76d0-4590-8f28-50449a21d0d8
psu-id: 289da2e6-5a01-430d-8075-8f7af71f6d2b
tpp-redirect-uri: https://httpbin.org/get
The resulting Signature could look something like this:
keyId="SN=D9EA5432EA92D254,CA=CN=Buypass Class 3 CA 3,O=Buypass AS-983163327,C=NO",
algorithm="rsa-sha256",
headers="date x-request-id psu-id tpp-redirect-uri",
signature="base64(rsa-sha256(signing_string))"
The above signature adheres to the Berlin Group requirements detailed in Section 12.2 of their implementation guidelines (as of v1.3), apart from some line breaks added for readability. In short, the requirements are:
The keyId must be formatted as SN={serial},CA={issuer}, but note that it seems to be up to the ASPSP to decide how the serial and issuer are formatted. However, most are likely to require the serial in uppercase hexadecimal representation and the issuer formatted in conformance with RFC 2253 or RFC 4514.
The algorithm used must be either rsa-sha256 or rsa-sha512
The following headers must be part of the signing string if present in the request: date, digest, x-request-id, psu-id, psu-corporate-id, tpp-redirect-uri
The signature must be base-64 encoded
As developers have only just begun to adopt this way of signing messages, you'll likely have to implement this yourself - but it's not too difficult if you carefully read the above-mentioned draft.
However, vendors have begun supporting the scheme; e.g., Apache CXF supports both signing and validation as of v3.3.0 in its cxf-rt-rs-security-http-signature module, as mentioned in the Apache CXF 3.3 Migration Guide. Surely, others will follow.
Validating the actual signature is easy enough, but validating the actual QSealC PSD2 certificate is a bit more cumbersome: you'll likely have to integrate with the EU's List of Trusted Lists to retrieve the root and intermediate certificates used to issue these certificates, and form a chain of trust together with, e.g., Java's cacerts and the Microsoft Trusted Root Certificate Program. I personally have had good experiences using difi-certvalidator (Java) for the actual validation process, as it proved very easy to extend to our needs, but there are surely many other good tools and libraries out there.
You'll also need to pay special attention to the certificate's organizationIdentifier (OID: 2.5.4.97) and qcStatements (OID: 1.3.6.1.5.5.7.1.3). You should check the certificate's organizationIdentifier against the PRETA directory, as there may be instances where a TPP's authorization has been revoked by its NCA but a CRL revocation hasn't yet been published by its QTSP.
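As a small illustration of what the organizationIdentifier looks like: ETSI TS 119 495 defines it roughly as "PSD" + a two-letter country code + the NCA identifier + the authorization number, e.g. PSDNO-FSA-123456. The regex below is my own approximation of that shape, not the authoritative definition; consult the spec before relying on it.

```javascript
// Approximate shape of an ETSI TS 119 495 PSD2 organizationIdentifier,
// e.g. "PSDNO-FSA-123456": "PSD" + ISO country code + NCA id + PSP id.
// This is a sanity check only; ETSI TS 119 495 is authoritative.
const PSD2_ORG_ID = /^PSD[A-Z]{2}-[A-Z]{2,8}-.+$/;

function looksLikePsd2OrgId(value) {
  return PSD2_ORG_ID.test(value);
}
```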
DISCLAIMER: When it comes to the whole QSealC PSD2 certificate signing and validation process, I find both the Berlin Group and the EBA to be very diffuse, leaving several aspects open to interpretation.
This is a rough explanation, but hopefully it gives you enough to get started.
The User-Agent Reduction origin trial is valid from Chrome version 95 to 101 according to the official documentation, but looking at the token acquisition screen, it seems to be valid up to version 111. I am currently on version 109. Is this one excluded?
https://developer.chrome.com/en/blog/user-agent-reduction-origin-trial/
https://developer.chrome.com/origintrials/#/view_trial/-7123568710593282047
Also, this trial is intended for testing in a situation where the user agent string and JavaScript API have been completely removed or changed. Is there another way to test before they are completely removed?
We would appreciate it if you could enlighten us.
I have added the necessary settings to the response headers, referring to the official documentation, but it does not work correctly.
https://developer.chrome.com/en/blog/user-agent-reduction-origin-trial/
I want to check if a user has a valid license for a Windows Store application (desktop bridge). At first the StoreLicense.IsActive[1] property looked promising but the docs state:
This property is reserved for future use, and it is not intended to be used in the current release. Currently, it always returns true.
Interestingly the demo code provided by Microsoft [2] also uses this function, although I can confirm that it always returns true.
What is the proper way to check for a valid license?
Regards,
[1] https://learn.microsoft.com/de-ch/uwp/api/windows.services.store.storelicense.isactive
[2] https://learn.microsoft.com/en-us/windows/uwp/monetize/implement-a-trial-version-of-your-app
It seems like you want to check whether the user currently has a valid license to use the app. In that case, see this section of the document Get license info for apps and add-ons:
To get license info for the current app, use the GetAppLicenseAsync method. This is an asynchronous method that returns a StoreAppLicense object that provides license info for the app, including properties that indicate whether the user currently has a valid license to use the app (IsActive) and whether the license is for a trial version (IsTrial).
So, per that document, you should currently use the StoreAppLicense.IsActive property to check for a valid app license, not StoreLicense.IsActive.
For more details, you can also reference the official sample.
I am new to EMV development. My question is regarding Tag 91 (Issuer Authentication Data), which is sent by the issuer in the EMV response. In my case, when tag 91 is missing from the response packet, the chip card decides to decline the transaction even if the issuer has approved the transaction online. So I am wondering whether Tag 91 is a mandatory tag which needs to be sent by the issuer each time it approves a transaction online, and what the industry-wide understanding of it is. Please let me know your thoughts.
Also, in my case, Application Interchange Profile Byte 1, Bit 3 = 1, which means external authentication is required.
Are you working on the card application or the terminal application?
Issuer authentication always has to be performed unless you are doing a partial chip implementation. I am sure you know that it is an additional level of security that ensures the response came from the correct issuer.
When AIP B1B3 is on, it means the card will expect tag 91.
In some cases the opposite is even the default; e.g. for D-PAS (Diners/Discover), AIP B1B3 is off, since it does not support External Authenticate; issuer authentication is instead verified during the second GENERATE AC. In such cases, if the issuer wants the card not to decline the transaction when the ARPC is not present, the partial chip implementation is explicitly flagged in the ACO (Application Configuration Option).
Check each payment scheme's card and terminal specification manuals carefully before you implement, as any loopholes in the implementation may help a fraudster skip the security you wish to provide. Rule of thumb: if you get an ARPC from the issuer, always send it to the card. Let the card decide.
From the card application side I can add that Application Interchange Profile Byte 1, Bit 3 = 1 means external authentication is supported, but not necessarily required. What does that mean? The card may support issuer authentication but not require it for an online transaction (issuer authentication can be mandatory or optional in the card's internal parameters). I.e., if it's optional, it would be great to do the authentication, but if it's absent, no problem, continue online. So if the issuer has not sent tag 91, the card may still approve the transaction.
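For reference, checking AIP Byte 1, Bit 3 is a simple bit test. A minimal sketch (the function name is mine, and TLV parsing of tag 82 is assumed to have happened elsewhere):

```javascript
// AIP (tag 82) is two bytes. In EMV bit numbering, Byte 1, Bit 3
// corresponds to the value 0x04 and indicates that issuer
// authentication (External Authenticate / tag 91) is supported.
function issuerAuthenticationSupported(aip) {
  if (aip.length < 2) throw new Error('AIP must be 2 bytes');
  return (aip[0] & 0x04) !== 0;
}
```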
I am planning to add the message carbons function (when a user is logged into multiple devices and sends or receives a message, it will sync between all devices) to a chat server. I am running Ejabberd and using strophe.js...
I am wondering if there are plugins written for Ejabberd that I can install, and also for strophe.js?
I looked over https://github.com/processone/ejabberd-contrib and the github for strophe.js
Neither of them seems to have plugins for message carbons. Wondering if anyone has implemented this before?
I have read that if it doesn't, I should treat it as a groupchat? I am not sure why that would work, and I'm not exactly sure whether that's good for resources; if it scales up, would that have an impact on the overall structure?
If it is treated as a group chat, then I assume each resource/session would be treated as a different user? Then when a message is sent into that group, all of those other sessions/users are updated, even though there are only 2 actual users?
ejabberd supports message carbons by default in recent versions.
This feature is unrelated to groupchat and cannot and should not be treated similarly.
If you read XEP-0280 Message Carbons you should see that sending a packet similar to the following is enough to enable it:
<iq id='enable1' type='set'>
<enable xmlns='urn:xmpp:carbons:2'/>
</iq>
You may also find valuable information in XMPP Academy video #2 at 27m30s.
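Since you're on strophe.js: there is no plugin needed on the client side either; you just send that IQ over your connection. A rough sketch, where buildCarbonsEnable constructs the same stanza as a plain string so the shape is easy to see (the id value is arbitrary, and the connection object is assumed):

```javascript
// XEP-0280: enable carbons by sending an IQ with an <enable/> child in
// the urn:xmpp:carbons:2 namespace. With strophe.js you would typically
// build and send it with the $iq helper, e.g.:
//   connection.send($iq({ type: 'set', id: 'enable1' })
//     .c('enable', { xmlns: 'urn:xmpp:carbons:2' }));
function buildCarbonsEnable(id) {
  return `<iq id='${id}' type='set'><enable xmlns='urn:xmpp:carbons:2'/></iq>`;
}
```

Once enabled for a resource, the server forwards (wraps) copies of sent and received messages to that resource, which is exactly the multi-device sync you describe, without any groupchat workaround.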
I've got a self-signed certificate for multiple domains (let's say my.foo.bar.com & yours.foo.bar.com) that I've imported using Keychain Access but Chrome will still not accept it, prompting me for verification at the beginning of each browsing session per domain.
The certificate was generated using the x509v3 subject alternative name extension to validate multiple domains. If I navigate to the site before I import the certificate, I get a different warning message than after importing. Attached below is an image of the two errors (with the top being the error before importing).
Is there any way to accept a self-signed multi-domain certificate? I only get warnings in Chrome, btw. FF and Safari work great (except those browsers suck ;) )
UPDATE: I tried generating the cert both with the openssl cli and the xca GUI
The problem is that you're trying to use too broad a wildcard (* or *.com).
The specifications (RFC 6125 and RFC 2818 Section 3.1) talk about "left-most" labels, which implies there should be more than one label:
1. The client SHOULD NOT attempt to match a presented identifier in
which the wildcard character comprises a label other than the
left-most label (e.g., do not match bar.*.example.net).
2. If the wildcard character is the only character of the left-most
label in the presented identifier, the client SHOULD NOT compare
against anything but the left-most label of the reference
identifier (e.g., *.example.com would match foo.example.com but
not bar.foo.example.com or example.com).
I'm not sure whether there's a specification that says how many labels there should be at minimum, but the Chromium code indicates that there must be at least 2 dots:
We required at least 3 components (i.e. 2 dots) as a basic protection
against too-broad wild-carding.
This is indeed to prevent overly broad cases like *.com. This may seem inconvenient, but CAs make mistakes once in a while, and having a measure that prevents a potential rogue cert issued for *.com from working isn't necessarily a bad thing.
If I remember correctly, some implementations go further than this and keep a list of second-level domains that would also be too broad (e.g. .co.uk).
Regarding your second example: "CN:bar.com, SANs: DNS:my.foo.bar.com, DNS:yours.foo.bar.com". This certificate should be valid for my.foo.bar.com and yours.foo.bar.com, but not bar.com. The CN is only a fallback solution when no SANs are present; if there are any SANs, the CN should be ignored (although some implementations are more tolerant).
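To tie the rules above together, here is a simplified sketch of wildcard matching (real validators such as Chromium's perform many more checks; function names are mine):

```javascript
// Reject wildcard patterns that are too broad: require at least
// 3 labels (2 dots), so "*.example.com" is fine but "*.com" is not,
// and the wildcard must be the entire left-most label.
function wildcardAcceptable(pattern) {
  const labels = pattern.split('.');
  if (labels[0] !== '*') return false; // wildcard must be the left-most label
  return labels.length >= 3;           // at least 2 dots
}

// The wildcard covers exactly one label, so "*.example.com" matches
// "foo.example.com" but not "bar.foo.example.com" or "example.com".
function matchesWildcard(hostname, pattern) {
  if (!wildcardAcceptable(pattern)) return false;
  const host = hostname.split('.');
  const pat = pattern.split('.');
  if (host.length !== pat.length) return false;
  return pat.slice(1).join('.') === host.slice(1).join('.');
}
```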