PayPal integration with OpenShift Online -- SSL IPN Issue - openshift

I built an app on OpenShift Online and now I'm trying to integrate with PayPal. I'm running into SSL cURL errors that I don't know how to address. I've looked through SO, OpenShift Online, PayPal and elsewhere but can't get this issue worked through.
Background:
PHP-based app running on OpenShift Online v2
Set up as
https://*******.rhcloud.com/test/test_IPN.php --- so I can use their
*.rhcloud.com wildcard certificate
Using PayPal "Buy Now" button with PayPal Payments Standard, testing in their sandbox
Using IPN sample code found at
https://github.com/paypal/ipn-code-samples/blob/master/paypal_ipn.php
Here is the portion of the code that seems to be at the root of my problem:
// CONFIG: Please download 'cacert.pem' from "http://curl.haxx.se/docs/caextract.html" and set the directory path
// of the certificate as shown below. Ensure the file is readable by the webserver.
// This is mandatory for some environments.
//$cert = __DIR__ . "./cacert.pem";
//curl_setopt($ch, CURLOPT_CAINFO, $cert);
Problem:
[1] Using the code "as is" (lines 79-80 commented out) throws the cURL error: "SSL connect error"
[2] Using lines 79-80 uncommented (with cacert.pem placed in the same dir as the PHP script) throws the cURL error: "Problem with the SSL CA cert (path? access rights?)"
It's likely I'm missing something simple here. Any help getting this to work properly on OpenShift Online is greatly appreciated. Thanks!

This line is pretty suspect:
$cert = __DIR__ . "./cacert.pem";
Basically you would end up with $cert equaling something like /home/path./cacert.pem, which is almost certainly not what you want, and is why you are getting the SSL error: cURL can't find the certificate.
That could be corrected to:
$cert = __DIR__ . "/cacert.pem";
It also might be better to store the cacert.pem in your $OPENSHIFT_DATA_DIR and reference it as such:
$cert = getenv("OPENSHIFT_DATA_DIR")."cacert.pem";
And make sure that the permissions on the cacert.pem are at least 0644
chmod 0644 $OPENSHIFT_DATA_DIR/cacert.pem
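Putting those suggestions together, the CA setup in the cURL call might look like the sketch below (the sandbox verification URL is an assumption; use whichever endpoint the sample script already posts to):
// Sketch: point cURL at a CA bundle kept in the OpenShift data dir.
$ch = curl_init('https://www.sandbox.paypal.com/cgi-bin/webscr');
$cert = getenv('OPENSHIFT_DATA_DIR') . 'cacert.pem';   // note: no leading "./", file readable (0644)
curl_setopt($ch, CURLOPT_CAINFO, $cert);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);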

Solution:
Force the use of TLS 1.2
Commenting out lines 79-80 and adding
curl_setopt($ch, CURLOPT_SSLVERSION, 6); // Force TLS 1.2
did the trick for me. Hope this helps someone else.
P.S. The need for TLS 1.2 came from this PayPal article https://www.paypal-knowledge.com/infocenter/index?page=content&widgetview=true&id=FAQ1914&viewlocale=en_US
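For reference, here is a minimal sketch of the IPN verification call with TLS 1.2 forced, using the named constant where available instead of the magic number 6 (the sandbox URL is an assumption; keep whatever endpoint the sample already uses):
// Post the received IPN data back to PayPal for verification, over TLS 1.2.
$ch = curl_init('https://www.sandbox.paypal.com/cgi-bin/webscr');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, 'cmd=_notify-validate&' . http_build_query($_POST));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
// CURL_SSLVERSION_TLSv1_2 === 6 (available on curl >= 7.34 / PHP >= 5.5).
curl_setopt($ch, CURLOPT_SSLVERSION, defined('CURL_SSLVERSION_TLSv1_2') ? CURL_SSLVERSION_TLSv1_2 : 6);
$response = curl_exec($ch);   // "VERIFIED" or "INVALID" on success
curl_close($ch);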

Related

HTTP request inside Azure CLI GitHub action fails with SSL expired error

We are using the AZ CLI GitHub Action azure/CLI (https://github.com/marketplace/actions/azure-cli-action)
The script that this workflow calls makes an HTTP request to an external API. This cURL call fails with the following:
curl: (60) SSL certificate problem: certificate has expired
More details here: curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
However I can confirm that the same request works locally.
The problem workflow step looks like this:
- name: Run script
  uses: azure/CLI@1.0.4
  with:
    azcliversion: 2.0.72
    inlineScript: |
      $GITHUB_WORKSPACE/github/scripts/script.sh
Why does cURL think that the SSL cert for the external API domain is expired, when I can make the same call to the same API domain successfully on my own machine?
It seems the problem was that azcliversion pointed to a version of the AZ CLI that has outdated certificates.
The problem was solved by removing the azcliversion field altogether, as the default version is latest, as specified in the docs for the action:
azcliversion – Optional Example: 2.0.72, Default: latest
So the step now looks like this:
- name: Run script
  uses: azure/CLI@1.0.4
  with:
    inlineScript: |
      $GITHUB_WORKSPACE/github/scripts/script.sh
Probably related to this: https://twitter.com/letsencrypt/status/1443621997288767491
Our cross-signed DST Root CA X3 expired today. If you are hitting an error, check out fixes in our community forum. We're seeing higher than normal renewals, so you may experience a slowdown in getting your certificates.

Basic setup using Symfony 4 messenger, php-enqueue, AWS SQS, AWS SNS

The goal is to be able to send messages using AWS SQS+SNS. This has been a struggle for a few days and I don't know how to make it work.
Symfony 4.2 has a new component, Messenger, that I wanted to use. It is supposed to work with php-enqueue as a third-party transport. I am using that to connect to AWS SQS+SNS.
I can't find any documentation that puts it all together. I see how php-enqueue connects to AWS, but the docs show the config in the code and not in the config yaml or .env files. That is a problem since I want Messenger/enqueue to handle the behind-the-scenes stuff.
I was able to make Symfony Messenger work without php-enqueue for local synchronous messages. But after that... Clearly I am not doing it right. I was hoping someone might have a boilerplate for this configuration.
Here is where I am at. I am just trying to send a message using SQS. I am getting an error:
Error executing "GetQueueUrl" on "https://sqs.us-west-2.amazonaws.com";
AWS HTTP error: Client error: `POST https://sqs.us-west-2.amazonaws.com`
resulted in a `400 Bad Request`
I tried many permutations of keys in the enqueue.yaml file but did not get it right. I used this for help but could not get it to work. https://enqueue.readthedocs.io/en/stable/bundle/config_reference/
Edit: I found that you can add the topic and queue names to the DSN. I no longer get the error and a topic is created, but the queue is not. Now the message bus is working, but synchronously and locally. No message is sent to AWS.
These are the Composer libs I installed. I am sure that there are too many, but I kept trying to make it work.
"aws/aws-sdk-php": "^3.19",
"enqueue/amqp-lib": "^0.9.8",
"enqueue/enqueue-bundle": "^0.9.8",
"enqueue/messenger-adapter": "^0.2.2",
"enqueue/snsqs": "^0.9.0",
"guzzlehttp/guzzle": "^6.0",
"symfony/amqp-pack": "^1.0",
"symfony/messenger": "4.2.*",
This is my messenger.yaml
framework:
    messenger:
        transports:
            amqp: 'enqueue://default?topic[name]=testQ&queue[name]=testQ'
        routing:
            # Route your messages to the transports
            'App\Message\SmsMessage': amqp
This is enqueue.yaml
enqueue:
    default:
        transport:
            dsn: '%env(resolve:ENQUEUE_DSN)%'
        client: ~
This is the entry in .env
###> enqueue/enqueue-bundle ###
ENQUEUE_DSN=snsqs::?key={key}&secret={secret}&region=us-west-2
###< enqueue/enqueue-bundle ###
This is the code in a controller to send a message:
public function index(MessageBusInterface $messageBus) {
    $message = new SmsMessage('This is so cool');
    $messageBus->dispatch($message);
    ...
}
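(For context, the SmsMessage class isn't shown in the question; a minimal message class for this setup would just be a plain PHP object, something like the sketch below.)
<?php
// src/Message/SmsMessage.php -- hypothetical minimal message class.
namespace App\Message;

class SmsMessage
{
    private $content;

    public function __construct(string $content)
    {
        $this->content = $content;
    }

    public function getContent(): string
    {
        return $this->content;
    }
}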
I had this same issue, which I managed to fix.
This is my messenger.yaml config that's working with SQS:
transports:
    sqs:
        dsn: enqueue://default?topic[name]=YOURTOPICNAME&queue[name]=YOURQUEUENAME&receiveTimeout=3
Hopefully this is of use to someone

"Could not get any response" response when using postman with subdomain

I am using Postman to test an API I have. All is good when the request does not contain a sub-domain; however, when I add a sub-domain to the URL I get this response:
Could not get any response
There was an error connecting to http://subdomain.localhost:port/api/
Why this might have happened:
The server couldn't send a response: Ensure that the backend is working properly
Self-signed SSL certificates are being blocked: Fix this by turning off 'SSL certificate verification' in Settings > General
Proxy configured incorrectly: Ensure that proxy is configured correctly in Settings > Proxy
Request timeout: Change request timeout in Settings > General
If I copy the same URL from Postman and paste it into the browser I get a proper response. Is there some kind of configuration I should do to make Postman work with sub-domains?
First, go to Settings in Postman:
Turn off SSL certificate verification in the General tab.
Turn off Global Proxy Configuration and Use System Proxy in the Proxy tab.
Set Request Timeout to 0 (zero).
Configure Apache:
If the above changes resulted in a 404 response, then continue reading ;-)
Users that host their site locally (like with XAMPP and/or WAMP) may be able to visit their virtual sites using an https:// prefixed address, but it's a lie, and to really enable SSL (for each virtual site), configure Apache like this:
Open the httpd-vhosts.conf file (from Apache's conf/extra directory) in your preferred text editor.
Change the virtual site's settings into something like:
<VirtualHost *:80 *:443>
    ServerName my-site.local
    ServerAlias *.my-site.local
    DocumentRoot "C:\xampp\htdocs\my-project\public"
    SSLEngine on
    SSLCertificateFile "path/to/my-generated.cert"
    SSLCertificateKeyFile "path/to/my-generated.key"
    SetEnv APPLICATION_ENV "development"
    <Directory "C:\xampp\htdocs\my-project\public">
        Options Indexes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
But of course, generate a dummy SSL certificate, and change all file paths (like "path/to/my-generated.cert") into real file paths.
Finally, test by visiting the local site in the browser, but using an http:// (without the S) prefixed address; Apache should now give an error like:
Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
I had the same issue. It was caused by a newline at the end of the "Authorization" header's value, which I had set manually by copy-pasting the bearer token (the paste accidentally included the newline at its end).
If you get a "Could not get any response" message from Postman native apps while sending your request, open Postman Console (View > Show Postman Console), resend the request and check for any error logs in the console.
Thanks to numaanashraf
This issue was resolved for me by setting Settings -> General -> Request timeout in ms = 0.
If all the above methods don't work, check your environment variables and make sure that the following variables are not set. If they are set and not needed by any other application, remove them:
HTTP_PROXY
HTTPS_PROXY
For me it was the http://localhost instead of https://localhost.
When getting the following error,
you need to do the following.
Step 1:
In Postman, click the wrench icon, go to settings, then go to the Proxy tab.
Step 2:
Create a custom Proxy. This article explains how to create a custom proxy.
After you create the custom proxy, make sure you turn the Proxy toggle button off. I put 61095 in for the proxy server and it worked for me.
Step 3:
Success
I came up with this solution:
In Postman go to Settings --> Proxy,
turn off Global Proxy Configuration,
and turn on Use System Proxy.
Then go to the Windows hosts file
'C:\Windows\System32\drivers\etc\hosts',
open that file in administrator mode,
and add the sub-domain to the hosts file.
For me what worked was to add 127.0.0.1 subdomain.localhost to my hosts file. On OSX that was /etc/hosts. Not sure why that was necessary, as I could reach the subdomain from Chrome.
In Postman go to Settings --> Proxy
and turn off Global Proxy Configuration.
For me, it was that the route I was calling in my node server wasn't returning anything. Adding
return res.status(200).json({
    message: 'success!',
    response: 'success!'
});
to the route I was calling resolved the issue.
You mentioned you are using a CER certificate.
According to the Postman page on certificates:
Choose your client certificate file in the CRT file field. Currently, we only support the CRT format. Support for other formats (like PFX) will come soon.
The name of the extension (CER, CRT) doesn't make the certificate that type of certificate, but these are the accepted extension names.
CER is an X.509 certificate in binary form, DER encoded.
CRT is a binary X.509 certificate, encapsulated in text (base-64) encoding.
You can use OpenSSL to change a CER file into a CRT file. I have not had good luck with it but it looks like this.
openssl x509 -inform PEM -in certificate.cer -out certificate.crt
or
openssl x509 -inform DER -in certificate.cer -out certificate.crt
Postman for Linux Version 6.7.1 - Ubuntu 18.04 - linux 4.15.0-43-generic / x64
I had the same problem and by chance I replaced http://localhost with http://127.0.0.1 and everything worked.
My etc/hosts had the proper entries for localhost, and https://localhost requests always worked as expected.
I have no clue why changing localhost to 127.0.0.1 for http solved the issue.
None of these solutions worked for me. Postman was not sending any request to the server because it could not find the host. So, if you modify your /etc/hosts to
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
It works for me.
For me the issue was that the Content-Length was too big. I placed the content of the body in Notepad++, counted the characters, put that figure in Postman, and then it worked.
I know it does not directly answer why the OP's sub-domain was not working, but it might help out someone.
In my case it was invisible spaces that Postman didn't recognize; the string of text rendered as if it had no spaces in Postman.
I disabled SSL certificate validation and System Proxy, and even tried the Postman Chrome extension (which is about to be deprecated), but when I downloaded and tried Insomnia it showed red dots in the place where those spaces were; they must have gotten there during copy/paste.
For anyone who experienced this issue with real domain instead of localhost and couldn't solve it using ANY OF THE ABOVE solutions.
Try changing your Network DNS (WIFI or LAN) to some other DNS. For me, I used Google DNS 8.8.8.8, 8.8.4.4 and it worked!
The solution is very simple if you are using an ASP.NET Core 2 application. Inside the ConfigureServices method in the Startup.cs file, add this line:
services.AddMvc()
    .SetCompatibilityVersion(CompatibilityVersion.Version_2_1)
    .AddJsonOptions(x => x.SerializerSettings.ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Ignore);
You just need to turn SSL off to send your request.
The proxy and other settings come with various errors.
My issue was caused by putting the wrong parameter in the header.
The required format was
Authorization: Token <string>
and I was trying
Authorization Token: <string>
After all the above methods (turning OFF SSL certificate verification, turning ON only Use System Proxy, and removing the HTTP_PROXY and HTTPS_PROXY system environment variables), it worked.
Note: Had to restart the Postman app, since the environment variables were changed.
Unchecking proxy and SSL Certificate Verification didn't work for me.
Unsetting PROXY environment variables did the trick.
export http_proxy=
export ftp_proxy=
export https_proxy=
Change to the directory where Postman is installed and then:
./Postman
In my case, MVC wasn't able to serialize the results (I accidentally used a model instead of DTO). I debugged down to passing a simple string, which worked. Once I fixed the serialization it all came up.
In my case the (corporate) proxy was using a self-signed SSL certificate which Postman disliked. I discovered it by activating
View->Show Postman console
and retrying the request. The console then showed the certificate error. In
Settings->General
I disabled
SSL certificate verification.
As I'm using the deprecated Postman extension for Chrome, to solve this issue I had to:
Call some GET request using the Chrome Browser itself.
Wait for the error page "Your connection is not private" to appear.
Click on ADVANCED and then proceed to [url] (unsafe) link.
After this, requests through the extension itself should work.
In my case it was a misconfigured subnet. Only one of the 2 subnets in the ELB worked.
I figured this out by doing a nslookup and trying to curl the returned IPs directly. Only one worked.
Postman just kept using the misconfigured one.
I had the same issue.
Turned out my timeout was set too low. I changed it to 30ms thinking it was 30sec. I set it back to 0 and it started working again.
I got the same "Could not get any response" issue because of a wrong parameter in the header. I fixed it by removing the HOST parameter from the header.
PS: Unfortunately, I had to install other software to find this out. It would be great to get this error message from Postman instead of a generic one.
In my case, I forgot to set the value of the variable in the "CURRENT VALUE" field.
I just experienced this error. In my case, the path was TOO LONG. So a URL like this gave me the error in Postman (fake example):
http://127.0.0.1:5000/api/batch/upload_import_deactivate_from_ready_folder
whereas
http://127.0.0.1:5000/api/batch/upld_impt_deac_ready_folder
worked fine.
Hope it helps someone who by accident read this far...

ejabberd contribution mod_apns does not work

I have added mod_apns to my ejabberd server. You can find this module here.
My ejabberd.yml configuration is like this:
mod_apns:
    address: "gateway.sandbox.push.apple.com"
    port: 2195
    certfile: "/Applications/ejabberd-15.10/conf/cert.pem"
    keyfile: "/Applications/ejabberd-15.10/conf/key.pem"
    password: "myPassword"
The address is the sandbox one since I am still in the development phase. And I have tested my cert.pem and key.pem and they are valid and working.
I send my device token to ejabberd server like this:
<iq type="set" to="myEjabberdServer.com">
    <register xmlns="https://apple.com/push">
        <token>myDeviceTokenWithoutAnySpace</token>
    </register>
</iq>
I can see my device token is saved in apns_users database.
But I still do not get notifications when my user is offline.
Am I doing anything wrong?
Does it work with gateway.sandbox.push.apple.com?
Should my device token be without spaces and only characters?
I appreciate your help..
You have asked for an alternate approach. This alternate approach takes the process of triggering push notifications out of the ejabberd server:
1. Use the mod_interact library. This will give you the ability to transfer your messages to another URL.
2. From there on you can use a direct HTTP call for push notifications (see the sketch below).
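A rough PHP sketch of step 2, assuming mod_interact forwards the offline message to your own endpoint (the POST field name and the hard-coded token are hypothetical; the gateway, cert, key and password are the ones from the question). Note the push here goes over the legacy binary APNs gateway, matching the mod_apns config above, rather than the newer HTTP/2 API:
<?php
// Hypothetical receiver for messages forwarded by mod_interact.
$body  = $_POST['body'] ?? 'You have a new message';   // field name is an assumption
$token = 'myDeviceTokenWithoutAnySpace';               // look this up in your apns_users table

// Build a legacy binary APNs frame and send it to the sandbox gateway.
$payload = json_encode(['aps' => ['alert' => $body, 'sound' => 'default']]);
$frame   = chr(0) . pack('n', 32) . pack('H*', $token)
         . pack('n', strlen($payload)) . $payload;

$ctx = stream_context_create();
stream_context_set_option($ctx, 'ssl', 'local_cert', '/Applications/ejabberd-15.10/conf/cert.pem');
stream_context_set_option($ctx, 'ssl', 'local_pk', '/Applications/ejabberd-15.10/conf/key.pem');
stream_context_set_option($ctx, 'ssl', 'passphrase', 'myPassword');

$apns = stream_socket_client('ssl://gateway.sandbox.push.apple.com:2195',
    $errno, $errstr, 30, STREAM_CLIENT_CONNECT, $ctx);
if ($apns !== false) {
    fwrite($apns, $frame);
    fclose($apns);
}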

Wordpress ==> SSL ==> MySQL is this configuration possible?

I am trying to put SSL encryption between my WordPress application and its MySQL database; is anyone aware of a solution/tutorial for this? I haven't managed to find anything on Google or the WordPress codex.
Further to #ticoombs response, and after some digging / testing, I found that by changing the constant defined in wp-config.php (in the root directory) to the following it worked!
define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);
...note the extra "I" in MYSQLI_CLIENT_SSL.
Symptoms: The symptom I observed was that the call to mysql_connect in /wp-includes/wp-db.php was generating a warning that parameter 8 (i.e. $client_flags) was not an integer.
Version: Vanilla install of 4.8.1, running on php 7.0
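For clarity, the relevant wp-config.php lines might end up looking like this (a sketch; the DB credentials are placeholders, and whether you also need the DB_SSL flag from the next answer depends on your setup):
// wp-config.php (excerpt) -- ask the mysqli client to use SSL.
define('DB_NAME', 'wordpress');          // placeholder
define('DB_USER', 'wp_user');            // placeholder
define('DB_PASSWORD', 'secret');         // placeholder
define('DB_HOST', 'db.example.com');     // placeholder
define('MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL);   // note the "I" in MYSQLI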
Yes, it is possible to connect WordPress to MySQL using SSL. Add define('DB_SSL', true); to your wp-config.php file and take a look at this:
http://wordpress.org/support/topic/wordpress-with-mysql-over-ssl
Just to build on the answer:
File Location: /wordpress/wp-includes/wp-db.php
From:
$client_flags = defined( 'MYSQL_CLIENT_FLAGS' ) ? MYSQL_CLIENT_FLAGS : 0;
To:
$client_flags = defined( 'MYSQL_CLIENT_FLAGS' ) ? MYSQL_CLIENT_FLAGS : MYSQL_CLIENT_SSL;
Currently WP should be able to handle adding the line below to wp-config.php, but in my findings I have not been able to get it to work:
define('MYSQL_CLIENT_FLAGS', MYSQL_CLIENT_SSL);
I wrote a good blog post on the matter.