I have deployed my Node.js project on Heroku, but I am not able to point my domain (purchased from ionos.ca) to the Heroku DNS target. I have added two domains in the Heroku dashboard:
*.mysite.com, DNS Target: aqueous-jay-p8wmra8eyzlv3gzckdhj99je.herokudns.com
www.mysite.com, DNS Target: experimental-turnip-ha25x6iwdwmb4xzxtsdrhj3k.herokudns.com
Then in my ionos.ca domain portal, I changed the CNAME to
aqueous-jay-p8wmra8eyzlv3gzckdhj99je.herokudns.com
But whenever I visit www.mysite.com I get an error saying
This site can’t provide a secure connection
www.mysite.com sent an invalid response.
Visiting mysite.com gives me this error:
This site can’t be reached
mysite.com’s server IP address could not be found.
Any idea how I could fix this? I have been trying to make it work for the last hour :(
Something is wrong with your SSL/TLS setup. Fiddler4/Wireshark is showing a fatal Internal Error (80) alert. I found some references that may help here: https://stackoverflow.com/questions/43436412/openssl-connection-alert-internal-error. If you are using NGINX, post your config and I can help with that.
Frame 138: 61 bytes on wire (488 bits), 61 bytes captured (488 bits) on interface 0
Ethernet II, Src: Fortinet_d4:fd:97 (70:4c:a5:d4:fd:97), Dst: Dell_b3:a3:f6 (b8:85:84:b3:a3:f6)
Internet Protocol Version 4, Src: 52.73.16.193, Dst: 192.168.1.40
Transmission Control Protocol, Src Port: 443, Dst Port: 63037, Seq: 1, Ack: 221, Len: 7
Transport Layer Security
    TLSv1.2 Record Layer: Alert (Level: Fatal, Description: Internal Error)
        Content Type: Alert (21)
        Version: TLS 1.2 (0x0303)
        Length: 2
        Alert Message
            Level: Fatal (2)
            Description: Internal Error (80)
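Separately from the TLS error, it is worth checking what each hostname actually resolves to. Here is a minimal Node.js sketch for that (assuming a recent Node with dns.promises; mysite.com stands in for your real domain). Note that a bare domain normally cannot carry a CNAME, so it needs an ALIAS/ANAME-style record if your DNS provider supports one:

// Check which CNAME (if any) each hostname currently has.
const dns = require('dns').promises;

async function check(host) {
  try {
    const targets = await dns.resolveCname(host);
    console.log(host, '-> CNAME', targets);
  } catch (err) {
    console.log(host, '-> no CNAME record (' + err.code + ')');
  }
}

(async () => {
  await check('www.mysite.com'); // should return the www DNS target from Heroku
  await check('mysite.com');     // bare domain: expect no CNAME; needs an ALIAS/ANAME-type record instead
})();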
I connected the MH-Z19 sensor to the D1 mini and want to flash it via ESPHome.
I followed this guide:
https://esphome.io/components/sensor/mhz19.html
I used the code
esphome:
  name: co2-sensor

esp8266:
  board: esp01_1m

# Enable logging
#logger:

# Enable Home Assistant API
api:

ota:
  password: "xxxxxx"

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password

  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Co2-Sensor Fallback Hotspot"
    password: "xxxxx"

captive_portal:

uart:
  rx_pin: GPIO3
  tx_pin: GPIO1
  baud_rate: 9600

sensor:
  - platform: mhz19
    co2:
      name: "CO2 Value"
    temperature:
      name: "MH-Z19 Temperature"
    update_interval: 60s
    automatic_baseline_calibration: false
But I cannot flash it; I get the following error:
======================== [SUCCESS] Took 305.85 seconds ========================
INFO Successfully compiled program.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received.
For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v3.2
Serial port /dev/ttyUSB0
Connecting......................................
A fatal error occurred: Failed to connect to ESP8266: No serial data received.
For troubleshooting steps visit: https://github.com/espressif/esptool#troubleshooting
I can, however, flash it, and it comes online if I disconnect the sensor; of course it then publishes no data. So I assume it's something to do with the UART. I also tried disabling the logging, which did nothing.
It seems the UART pins GPIO1/GPIO3 are used by default for flashing and debug logging, even though I thought I had disabled the logging option. I used pins 4 and 5 instead and it worked.
https://esphome.io/components/uart.html#uart
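For reference, the uart block that worked looks roughly like this (a sketch; I am assuming "pins 4 and 5" means GPIO4/GPIO5 on the D1 mini, and rx/tx may need swapping to match your wiring):

uart:
  # GPIO4/GPIO5 (D2/D1 on the D1 mini) do not collide with the
  # flashing/logging serial port on GPIO1/GPIO3 (assumed pin mapping)
  rx_pin: GPIO4
  tx_pin: GPIO5
  baud_rate: 9600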
Note that the value I got was 5000 ppm at first, when it was plugged into the Pi on which I'm running HA. When connecting it to another PSU I got 'normal'-looking values. I assume it simply did not get enough power from the Pi.
In ejabberd 18.01-2, installed via apt in an LXC container running Ubuntu 18.04 Bionic LTS, I'm trying to set up mod_http_upload.
In the listen section, I have
listen:
  -
    port: 5444
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload
In the example configuration file the commented-out port was 5444; however, in the current documentation it is 5443, so I am not sure which one is right.
In the modules section, I have
modules:
  mod_http_upload:
    host: "upload.ejabberd.forumanalogue.fr"
    max_size: infinity
    thumbnail: true
    put_url: "https://ejabberd.forumanalogue.fr:5444/upload"
    docroot: "/ejabberd/upload"
When I start the service, I can see an odd message in the logs
2019-11-11 21:02:35.287 [warning] <0.367.0>@ejabberd_pkix:handle_call:255 No certificate found matching 'upload.ejabberd.forumanalogue.fr': strictly configured clients or servers will reject connections with this host; obtain a certificate for this (sub)domain from any trusted CA such as Let's Encrypt (www.letsencrypt.org)
It is strange because I have a signed wildcard certificate.
certfiles:
- "/etc/letsencrypt/live/forumanalogue.fr/*.pem"
I can see the service with my client (Gajim), but when I try to send a file to another local account, I receive the error "Access denied by service policy". See the complete stanza:
<iq xml:lang='en' to='foo@forumanalogue.fr/gajim.HCLJ4BZI' from='upload.ejabberd.forumanalogue.fr' type='error' id='1dd35274-90e9-4b3b-9608-0fab59afe34e'>
  <request xmlns='urn:xmpp:http:upload'>
    <filename>a.out</filename>
    <size>27232</size>
    <content-type>application/octet-stream</content-type>
  </request>
  <error code='403' type='auth'>
    <forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
    <text xml:lang='en' xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'>Access denied by service policy</text>
  </error>
</iq>
I had to enable debug logging in order to see something. It is quite verbose, but I think the relevant part, which is not redundant with the client message, is:
2019-11-11 20:53:08.329 [debug] <0.501.0>@mod_http_upload:process_slot_request:544 Denying HTTP upload slot request from foo@forumanalogue.fr/gajim.HCLJ4BZI
Thank you for your help.
I tried ejabberd 18.01 with a configuration similar to yours, and it works for me.
Looking at the source code, that "process_slot_request:544" error means the account attempting to use the upload feature is not allowed by the "local" access rule of the vhost it sent the request to. Probably it's a remote account, that is, remote to that upload service. In other words, the service upload.whatever can only be used by accounts like user12@whatever.
In your case, you are attempting to use upload.ejabberd.forumanalogue.fr from the account foo@forumanalogue.fr, which is not local to that upload service.
Several ideas; I hope one of them suits your specific setup:
A) Don't mess with vhosts. If it's forumanalogue.fr, keep it that way everywhere.
B) Use @HOST@ in the host and put_url options (see the sketch after this list).
C) Or, if you really want to mess with hosts, add access rules so that accounts in that vhost are considered "local" to the upload service.
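For example, option B would look roughly like this (a sketch based on the ejabberd docs; the port and docroot are copied from your config, adjust as needed):

modules:
  mod_http_upload:
    host: "upload.@HOST@"
    put_url: "https://@HOST@:5444/upload"
    max_size: infinity
    thumbnail: true
    docroot: "/ejabberd/upload"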
I got this error while trying to configure level 2 authentication using IdM, PEP Proxy and PDP.
I am using the latest versions of AuthZForce, IdM and PEP Proxy, but this error still persists.
config.azf = {
    enabled: true,
    protocol: 'http',
    host: 'localhost',
    port: 8080,
    custom_policy: undefined // use undefined to default policy checks (HTTP verb + path).
};
This is the part of the config that is relevant.
As I understand it, the IdM connected with AuthZForce should auto-create domains, but for some reason that is not the case.
I have tried different versions and read similar issues on Stack Overflow, but the problem still persists. Any advice, or a pointer to what I am doing wrong, would be really helpful.
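For what it's worth, this is roughly how I check whether the IdM has actually created any domain on the AuthZForce side (a sketch; the /authzforce-ce/domains path follows the FIWARE AuthZForce CE docs and the host/port match my config above, so adjust if your deployment differs):

// Node.js sketch: list the domains known to AuthZForce.
// An empty list suggests the IdM never created one.
const http = require('http');

http.get('http://localhost:8080/authzforce-ce/domains', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    console.log('status:', res.statusCode);
    console.log(body); // XML with one <link rel="item" .../> entry per domain
  });
}).on('error', (err) => console.error('AuthZForce not reachable:', err.message));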
Thanks
I have a local SMTP email server running on my machine that I use for testing purposes. It listens for SMTP on port 25. I am able to send and receive emails with it using a regular email client.
When I build a Node-RED flow that contains an e-mail output node and configure its properties with:
to: <email address>
server: localhost
port: 25
and submit a flow, I get the error:
25 Feb 16:43:24 - [error] [e-mail:<email address>] Error: 101057795:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:openssl\ssl\s23_clnt.c:794:
I am at a loss on how to proceed. Looking at the messages, it almost appears that there is some form of SSL negotiation/test at play here. Switching on trace on my SMTP server, I find the following logs each time I try and run a flow:
"TCPIP" 10708 "2016-02-25 16:43:08.294" "TCP - 127.0.0.1 connected to 127.0.0.1:25."
"DEBUG" 10708 "2016-02-25 16:43:08.298" "Creating session 22"
"SMTPD" 10708 22 "2016-02-25 16:43:08.298" "127.0.0.1" "SENT: 220 WIN7-X64 ESMTP"
"DEBUG" 9772 "2016-02-25 16:43:08.299" "Ending session 22"
It appears that the Node-RED node is sending a connection request, getting back the SMTP 220 response and then failing immediately after that.
I came across the same problem and have a nasty hack that enables mail to go via my local Exchange server's plain SMTP, with no auth.
Edit the .../61-email.js file and change it like this:
var smtpTransport = nodemailer.createTransport({
    host: node.outserver,
    port: node.outport,
    secure: false,
    ignoreTLS: true //,
    // auth: {
    //     user: node.userid,
    //     pass: node.password
    // }
});
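If you want to confirm those transport options against your server before patching the node, a quick standalone check with nodemailer looks roughly like this (assuming a reasonably recent nodemailer; the addresses are placeholders):

var nodemailer = require('nodemailer');

// Same idea as the patched node: plain SMTP on port 25, no TLS, no auth.
var transporter = nodemailer.createTransport({
    host: 'localhost',
    port: 25,
    secure: false,   // do not start with TLS
    ignoreTLS: true  // and do not upgrade via STARTTLS
});

// verify() opens a connection and greets the server without sending mail.
transporter.verify(function (err) {
    if (err) return console.error('SMTP check failed:', err);
    transporter.sendMail({
        from: 'sender@example.com',   // placeholder addresses
        to: 'recipient@example.com',
        subject: 'Node-RED SMTP test',
        text: 'Plain SMTP, no auth.'
    }, function (err, info) {
        console.log(err || info.response);
    });
});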
I see Dave has replied to the GitHub issue, but just to close the loop on this question:
At this time (Feb 2016) the node assumes SSL is always available and enabled. At some point we need to go back to the email node and find a simple way to expose a lot more of the nodemailer options, to allow connections to a wider range of email providers, both public and private.
I am recording an HTTPS session of a JSF-based web app in JMeter and it's not working.
Target application is hosted on: AWS
JMeter version: 2.9 r1437961
Browser: Chrome version 29.0.1547.65
Java: java version "1.6.0_27"
OpenJDK Runtime Environment (IcedTea6 1.12.5) (6b27-1.12.5-0ubuntu0.12.04.1)
OpenJDK Server VM (build 20.0-b12, mixed mode)
OS: Ubuntu 12.04
Proxy server config:
Port: 8084
Target Controller: Test Plan > Thread Group
Capture HTTP headers is checked.
HTTP Sampler settings:
Type: not selected. Follow Redirects and Use KeepAlive checked.
URL patterns to exclude:
1. Added Suggested Excludes
2. .*\.jsf
Exceptions that are getting thrown (from JMeter.log):
ERROR - jmeter.protocol.http.proxy.Proxy: java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
ERROR - jmeter.protocol.http.proxy.Proxy: Problem with SSL certificate? Ensure browser is set to accept the JMeter proxy cert: Connection closed by remote host java.net.SocketException: Connection closed by remote host
at sun.security.ssl.SSLSocketImpl.checkWrite(SSLSocketImpl.java:1377)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:62)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.jmeter.protocol.http.proxy.Proxy.writeToClient(Proxy.java:404)
at org.apache.jmeter.protocol.http.proxy.Proxy.run(Proxy.java:218)
The steps I am following are:
1. Set the proxy server to point to 8084.
2. Change the proxy settings in Chrome:
Set the HTTPS proxy to 8084.
3. Disabled all Chrome extensions and the Chrome account.
4. Started the JMeter proxy server and hit https://url/login
5. The certificate confirmation page appears in the browser. Meanwhile, jmeter.log shows:
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Creating Daemon Socket on port: 8084
2013/09/11 13:16:30 INFO - jmeter.protocol.http.proxy.Daemon: Proxy up and running!
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Proxy will remove the headers: If-Modified-Since,If-None-Match,Host
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (clients4.google.com)
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: Opened Keystore file: /home/abhijeet/Automation_Dev/LoadAutomation/Jmeter/apache-jmeter-2.9/bin/proxyserver.jks
2013/09/11 13:22:39 INFO - jmeter.protocol.http.proxy.Proxy: KeyStore for SSL loaded OK and put host in map (translate.googleapis.com)
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHCAbstractImpl: Local host = abhijeet-desktop
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: HTTP request retry count = 1
2013/09/11 13:22:40 INFO - jmeter.protocol.http.sampler.HTTPHC4Impl: Setting up HTTPS TrustAll scheme
2013/09/11 13:22:40 INFO - jmeter.protocol.http.proxy.FormCharSetFinder: Using htmlparser version: 2.0 (Release Build Sep 17, 2006)
6. Thread group starts showing unknown requests to these domains:
1. translate.googleapis.com
2. clients4.google.com
3. www.google.co.in
4. www.google.com
5. ssl.gstatic.com
6. safebrowsing.google.com
7. alt1-safebrowsing.google.com
8. clients4.google.com
9. www.gstatic.com
...
n. All other requests going to the target application.
(For every request the above exceptions are thrown)
I believe the Google domain requests above are getting recorded because Chrome is dynamically searching the keywords on Google while I am typing the URL in the address bar. But I don't want these requests to be recorded in the Thread Group.
Also, I tried the solutions from these pages but they didn't work for me:
Link 1
Link 2
Link 3
I don't understand why JMeter is not able to use the fake certificate that it already has. I checked the SSL settings in Chrome and could not find any JMeter certificate. Need help!
To do this in Chrome/IE, we have to place the certificate into the 'Trusted Root Certification Authorities' store:
Double-click the certificate you created
The Certificate Import Wizard opens
Click Next
Select the second radio button (Place all certificates in the following store)
Click Browse and select 'Trusted Root Certification Authorities'. Click Next
Click Finish
Check that your certificate is installed in Chrome Settings (under HTTPS/SSL) > Manage certificates... (Trusted Root Certification Authorities tab)
This should at least cure the exceptions thrown, as your screenshot shows.
I had the same problem and solved it by trusting the certificate. Just like you, when I looked at
Options > Advanced > Certificates > View Certificates ==> Authorities
I couldn't see a name like ApacheJMeterRootCertificate.crt or anything related, but I realized that there is an entry named something like
_DO NOT INSTALL unless this is your certificate
I clicked this object and chose 'Edit Trust' for both items under it. I am sharing my screenshot. I hope this can help you and others.
I use Firefox. In Chrome there should be a similar way to edit the certificate.
JMeter 2.12 has good support for HTTPS. Under the WorkBench, just select Add -> Non-Test Elements -> HTTP(S) Test Script Recorder. This version worked first time for me.
The latest versions of Google Chrome have made it difficult to bypass security settings, to protect against threats such as phishing or man-in-the-middle attacks.
I have successfully configured Google Chrome (v54.0) to allow the JMeter self-signed certificate for HTTP(S) recording.
Here are the instructions (on Windows):
Open the MMC console (Win + R, type mmc, press Enter)
Select File > Add/Remove Snap-in
Select the Certificates snap-in for the current user
Select Trusted Root Certification Authorities >> Certificates
Right-click the Certificates folder and select All Tasks >> Import...
Import the JMeter self-signed certificate using the wizard, keeping the default options.
Once installed, right-click the JMeter self-signed certificate and select Properties
On the General tab, make sure the 'Enable for all purposes' option is selected
On the Cross-Certificates tab, include the URL of the application you want to record (make sure you enter the full URL, e.g. https://www.live.com)
Close all windows.
Done. You should now be able to reach the destination, bypassing the Chrome security alert, and start recording.