GCE instance - OPENVPN - not resolving any address - google-compute-engine

I don't have internet access when I'm connected to my VPN server.
I have tried installing Debian manually on my home virtual machine, and it runs without problems, so the VPN server itself is not the issue.
I want my GCE Debian instance to connect to OpenVPN and route its internet traffic through that IP address.
What am I missing?
Here is my .ovpn config:
remote xxxxxxx 7777 tcp
verb 4
client
nobind
dev tun
cipher AES-128-CBC
key-direction 1
redirect-gateway def1
tls-client
remote-cert-tls server
# uncomment the lines below for use with Linux
script-security 2
# if you use resolvconf
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf
# if you use systemd-resolved first install openvpn-systemd-resolved package
#up /etc/openvpn/update-systemd-resolved
#down /etc/openvpn/update-systemd-resolved
<cert>
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            e2:7e:b0:e5:dd:37:33:6c:36:49:76:2f:ec:0e:73:e7
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=ca
        Validity
            Not Before: Nov 18 14:27:52 2021 GMT
            Not After : Feb 21 14:27:52 2024 GMT
        Subject: CN=gitlab
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:d3:51:b2:....
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Subject Key Identifier:
                1B:56:09:AE:B4:5D:26:18:....
            X509v3 Authority Key Identifier:
                keyid:92:76:43:....
                DirName:/CN=ca
                serial:02:D6:....
            X509v3 Extended Key Usage:
                TLS Web Client Authentication
            X509v3 Key Usage:
                Digital Signature
    Signature Algorithm: sha256WithRSAEncryption
        52:32:ca:......
-----BEGIN CERTIFICATE-----
cert here
-----END CERTIFICATE-----
</cert>
<key>
-----BEGIN PRIVATE KEY-----
key here
-----END PRIVATE KEY-----
</key>
<ca>
-----BEGIN CERTIFICATE-----
cert here
-----END CERTIFICATE-----
</ca>
<tls-auth>
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
key here
-----END OpenVPN Static key V1-----
</tls-auth>
It connects to the VPN successfully, but then nothing happens: no internet access...
UPDATE:
I have edited the sysctl configuration and enabled net.ipv4.ip_forward, but that doesn't solve the issue.
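For reference, a minimal sketch of enabling forwarding both for the running kernel and persistently (the sysctl.d file name is arbitrary):
# Enable IPv4 forwarding immediately
sudo sysctl -w net.ipv4.ip_forward=1
# Persist the setting across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-openvpn-forward.conf
sudo sysctl --system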
Server config file:
# server 172.16.100.0 255.255.255.0
verb 3
tls-server
ca /etc/openvpn/easyrsa/pki/ca.crt
key /etc/openvpn/easyrsa/pki/private/server.key
cert /etc/openvpn/easyrsa/pki/issued/server.crt
dh /etc/openvpn/easyrsa/pki/dh.pem
crl-verify /etc/openvpn/easyrsa/pki/crl.pem
tls-auth /etc/openvpn/easyrsa/pki/ta.key
key-direction 0
cipher AES-128-CBC
#management 127.0.0.1 8989
keepalive 10 60
persist-key
persist-tun
topology subnet
#proto tcp
#port 1194
#dev tun0
status /tmp/openvpn-status.log
user nobody
group nogroup
push "topology subnet"
push "route-metric 9999"
push "dhcp-option DNS 1.1.1.1"

I would try making the following modification so that everything is pushed from the server. Topology is usually set on the server side, not on the client side, and you want to push the redirect gateway from the server rather than setting it on the client. I also added back the server subnet so that we know the source IP addresses for masquerading / NAT.
# push "topology subnet"
# push "route-metric 9999"
server 172.16.100.0 255.255.255.0
topology subnet
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 1.1.1.1"
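After restarting the server and reconnecting, you can check on the client that the pushed default route actually arrived. A quick sanity check (the expected output is illustrative, assuming the 172.16.100.0/24 subnet above):
# The redirect-gateway push installs two half-default routes via tun0
ip route show | grep tun0
# Expected (illustrative):
#   0.0.0.0/1 via 172.16.100.1 dev tun0
#   128.0.0.0/1 via 172.16.100.1 dev tun0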
Lastly, you'll want to make sure masquerading is turned on in iptables so that your traffic is NATed on its way out of the OpenVPN server:
iptables -t nat -A POSTROUTING -s 172.16.100.0/24 -o eth0 -j MASQUERADE
You may have a different Ethernet interface name, but you can find the correct name using ifconfig or ip addr.
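A rule added this way does not survive a reboot. One common way to persist it on Debian is the iptables-persistent package (a sketch; the installer offers to save the current ruleset):
# Re-add the rule if needed, then save the current ruleset
sudo iptables -t nat -A POSTROUTING -s 172.16.100.0/24 -o eth0 -j MASQUERADE
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save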

Related

Openshift Origin registry: how to make it accessible?

We are setting up a test OpenShift Origin cloud, which we created using the openshift-ansible playbook. We are following the documentation at: https://docs.openshift.com/container-platform/latest/install_config/install/advanced_install.html
We have not done anything special concerning the openshift registry or router.
We are pretty new to this topic and have been trying for a few days to make the OpenShift registry accessible....
We have 3 hosts:
master (unschedulable)
node-1 which is set to the region 'infra' and has the registry and router services
node-2 (other region).
Here are the services running in the default project:
NAME               CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    172.30.78.66     <none>        5000/TCP                  3h
kubernetes         172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     3h
registry-console   172.30.190.63    <none>        9000/TCP                  3h
router             172.30.197.135   <none>        80/TCP,443/TCP,1936/TCP   3h
When we SSH directly into node-1, where the registry and router are running, we can access the registry without problems and push images, exactly as described here: docs.openshift.org/latest/install_config/registry/accessing_registry.html
However, we cannot access the registry from the other hosts (master or node-2), and we really do not understand how to make it accessible.... We have of course read: docs.openshift.org/latest/install_config/registry/securing_and_exposing_registry.html#access-insecure-registry-by-exposing-route
We have used this command:
oc expose service docker-registry --hostname=<hostname> -n default
The documentation says: You must be able to resolve this name externally via DNS to the router’s IP address.
As the router does not have any EXTERNAL-IP address attached to it, we do not understand how to reach it.
Is there any oc or oadm command for exposing the router through an external-ip address?
Thanks a lot in advance
Emmanuel
Based on your stated configuration, I would expect the path to your OpenShift UI/API (openshift.yourdomain.com) to route to the same IP as node-1, because that is where you are running the router.
If that is the case, then in DNS you would point the hostname you are passing via the command below at that same IP, or add it as a CNAME to that host.
oc expose service docker-registry --hostname=<hostname> -n default
In a larger setup with a dedicated set of load-balancer (lb) nodes, you might have a specific A record for that set. You could then make the hostname a CNAME to that record.
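Either way, you can verify the DNS side from any host before digging further; a quick check (hostnames are placeholders):
# The exposed route's hostname should resolve to the same address as node-1
dig +short registry.yourdomain.com
dig +short node-1.yourdomain.com
# Both should print the same IP (or the first should be a CNAME to the second)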

How to insert my RSA private key into GCE VM through Google Deployment Manager?

Does anyone know how to pass an RSA private key through the deployment configuration file below to a Google Compute Engine (GCE) virtual machine? I am doing this because the software installed on my GCE virtual machine needs to SSH into other virtual machines on which the corresponding RSA public key has already been installed.
resources:
- name: gml
  type: gml.py
  properties:
    zones:
    - us-east1-b
    - europe-west1-b
    - asia-east1-a
    machineType: n1-standard-2
    nodesPerZone: 5
    diskSize: 10
    privKey: |
      -----BEGIN RSA PRIVATE KEY-----
      MIIEowIBAAKCAQEAmjMePciwIBJYSWTE9CTF0o1xQt3sbIGrJO3HKTseR4Bs+zqI
      HehgkWMCnXMnJeE+7YpF4JI1gXEIhaGH+9GkN3/Zxu8VMC5zHwXChg3b/Ew1Ws7c
      PjIi+YKpyRg70v623UqGBMb58hPTCEAF91Q00zT95dxGUWBus9rovpZdgT0flp/8
      X134qGp3bzvgZ1P0BGW6ZcLkmtPFgv6E/jDmV36eNzOEMmyhq7HvEcDaMMyT5PuD
      i2HAGNE1u8rgFuIVgipN5SEZ5GcFGZF9boMXObr7JkeCvgt7masTUNVw2Ii5JxNB
      GVFzpLNVxHeo7YBqhz5/8aaLdNY58LIbioRm3wIDAQABAoIBAHYxnIqLG8VZiman
      YPgqf5+GXzx70s7RDZf+0lvePrVb0S04jkEub2bBV63MKEO2xX9aL3mVWIHhXEDh
      sdPpu0/3JbyAYeNOl1s+FP6f/PEEkRkL2nGqCHjsGKxVcPWn3A7/In7i7Y8KdwWp
      .....
      .....
      -----END RSA PRIVATE KEY-----
I think the only way to place a file would be with a startup script. Something like
metadata:
- key: startup-script
  value: |
    #!/usr/bin/env bash
    # create file if not exist
    ...
or
metadata:
- key: startup-script-url
  value: gs://my-secret-bucket/set-key.sh
Personally, I prefer the latter. If you need to update the script for some reason, it will not require updating the deployment, and the key will not be visible in the Cloud Console.
In either case you should gauge for yourself where you want your private key to be visible.
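To make the second variant concrete, here is a sketch of what set-key.sh itself might look like; the bucket, path, and user name are hypothetical, and the instance's service account needs read access to the bucket:
#!/usr/bin/env bash
# Hypothetical startup script: fetch the private key from a private GCS
# bucket and install it for a user with SSH-appropriate permissions.
set -euo pipefail
install -d -m 700 -o myuser -g myuser /home/myuser/.ssh
gsutil cp gs://my-secret-bucket/id_rsa /home/myuser/.ssh/id_rsa
chmod 600 /home/myuser/.ssh/id_rsa
chown myuser:myuser /home/myuser/.ssh/id_rsa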

ERR_INSECURE_RESPONSE + subdomains

I need help understanding whether this Chrome behaviour is correct or due to a mistake on my part. I'm using self-signed certificates and subdomains.
From https://preproduser.svtools.tp.XXX.it/#!/login/ there is an Ajax query to another subdomain which returns error:
OPTIONS https://preprodauth.svtools.tp.XXX.it/v1/authenticate net::ERR_INSECURE_RESPONSE
Both apps are served by an Nginx reverse proxy with this certificate:
# openssl x509 -noout -certopt no_sigdump,no_pubkey -text -in selfsigned.svtools.tp.XXX.it.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 15659850292680964857 (0xd952f6df2b0bbaf9)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=IT, ST=Italia, L=YYY, O=XXX, CN=*.svtools.tp.XXX.it
        Validity
            Not Before: Apr 2 18:36:50 2017 GMT
            Not After : Mar 31 18:36:50 2027 GMT
        Subject: C=IT, ST=Italia, L=YYY, O=XXX, CN=*.svtools.tp.XXX.it
        X509v3 extensions:
            X509v3 Basic Constraints:
                CA:FALSE
            X509v3 Key Usage:
                Digital Signature, Non Repudiation, Key Encipherment
            X509v3 Subject Alternative Name:
                DNS:svtools.tp.XXX.it, DNS:preproduser.svtools.tp.XXX.it, DNS:preprodauth.svtools.tp.XXX.it
I can see in Chrome that the certificate has the SAN entries, and Chrome tells me ERR_CERT_AUTHORITY_INVALID (expected behaviour).
Question: after accepting the certificate warning when opening https://preproduser.svtools.tp.XXX.it/, I expect no further problems for https://preprodauth.svtools.tp.XXX.it/. However, Chrome still complains about the security issue. Can I avoid this?
Riccardo
While you can override the ERR_CERT_AUTHORITY_INVALID warning, the override applies only to exactly this certificate at exactly this hostname; it does not automatically trust the certificate for any other host.
Just imagine if it were otherwise: a man-in-the-middle attacker could use a certificate to intercept the connection to some unimportant host, where many users would simply override the warning because no sensitive data are involved. Once this is done, the attacker could use the same certificate (now trusted by the browser) to intercept connections to important hosts. That's why any certificate exemption applies only to the specific hostname for which the certificate was explicitly exempted.
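If you control the clients, one way to avoid per-host exemptions is to import the certificate into the trust store the browser actually consults, so that it is trusted for every name in its SAN. A sketch for Chrome on Linux, which reads the shared NSS database (requires libnss3-tools; whether this works also depends on the certificate's extensions, e.g. CA:FALSE here):
# Import the self-signed cert into the user's NSS database
certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n "svtools-selfsigned" \
    -i selfsigned.svtools.tp.XXX.it.crt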

Can't access WildFly over HTTPS with browsers but can with OpenSSL client

I've deployed Keycloak on WildFly 10 via Docker. SSL support was enabled via the CLI. The final standalone.xml has:
<security-realm name="UndertowRealm">
    <server-identities>
        <ssl>
            <keystore path="keycloak.jks" relative-to="jboss.server.config.dir"
                      keystore-password="changeit" alias="mydomain" key-password="changeit"/>
        </ssl>
    </server-identities>
</security-realm>
Undertow subsystem:
<https-listener name="default-https" security-realm="UndertowRealm" socket-binding="https"/>
The key was generated and placed in $JBOSS_HOME/standalone/configuration:
keytool -genkey -noprompt -alias mydomain \
    -dname "CN=mydomain, OU=mydomain, O=mydomain, L=none, S=none, C=SI" \
    -keystore keycloak.jks -storepass changeit -keypass changeit
Port 8443 is exposed via Docker.
Accessing https://mydomain:8443/ in Chrome results in ERR_CONNECTION_CLOSED. Firefox returns "Secure Connection Failed, the connection was interrupted..."
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
Input:
GET / HTTP/1.1
Host: https://mydomain:8443
This returns the Keycloak welcome page.
So clearly WildFly is working, but I am being blocked by the browsers for some reason. What could that reason be? I was under the impression that I should be able to add an exception for a self-signed certificate in either browser. Maybe the generated key length is too short, or maybe I am hitting some other security constraint imposed by Firefox/Chrome?
Using these parameters in keytool solved the problem: -keyalg RSA -keysize 2048 (older keytool versions default to a 1024-bit DSA key, which modern browsers reject).
... -dname "CN=mydomain
The certificate is probably malformed. Browsers and other user agents, like cURL and OpenSSL, use different policies to validate an end-entity certificate. A browser will reject a certificate whose hostname appears only in the Common Name (CN), while other user agents will accept it.
The short answer to this problem is: place DNS names in the Subject Alternative Name (SAN), not in the Common Name (CN).
You may still encounter other problems, but getting the names right will help immensely with browsers.
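For example, a reasonably recent keytool can put the DNS name into the SAN at generation time via the -ext option; a sketch reusing the placeholder names from the question:
keytool -genkey -noprompt -alias mydomain -keyalg RSA -keysize 2048 \
    -dname "CN=mydomain" -ext "SAN=dns:mydomain" \
    -keystore keycloak.jks -storepass changeit -keypass changeit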
However, OpenSSL client works nicely:
openssl s_client -connect mydomain:8443
OpenSSL prior to 1.1.0 did not perform hostname validation; earlier versions will accept any name.
cURL or Wget would be a better tool to test with in this case.
For reading on the verification you should perform when using OpenSSL, see:
SSL/TLS Client
For reading on the rules for hostnames and where they should appear in a X509 certificate, see:
How do you sign Certificate Signing Request with your Certification Authority?
How to create a self-signed certificate with openssl?

CakePHP 3 - Enable SSL on development server [duplicate]

OS: Ubuntu 12.04 64-bit
PHP version: 5.4.6-2~precise+1
When I test an HTTPS page I am writing through the built-in web server (php5 -S localhost:8000), Firefox (16.0.1) says "Problem loading: The connection was interrupted", while the terminal tells me "::1:37026 Invalid request (Unsupported SSL request)".
phpinfo() tells me:
Registered Stream Socket Transports: tcp, udp, unix, udg, ssl, sslv3, tls
[curl] SSL: Yes
SSL Version: OpenSSL/1.0.1
openssl:
OpenSSL support: enabled
OpenSSL Library Version OpenSSL 1.0.1 14 Mar 2012
OpenSSL Header Version OpenSSL 1.0.1 14 Mar 2012
Yes, http pages work just fine.
Any ideas?
See the manual section on the built-in webserver shim:
http://php.net/manual/en/features.commandline.webserver.php
It doesn't support SSL encryption; it's for plain HTTP requests only. The openssl extension and function support are unrelated here: the built-in server does not accept requests or send responses over the stream wrappers.
If you want SSL to run over it, try a stunnel wrapper:
php -S localhost:8000 &
stunnel3 -d 443 -r 8000
It's just for toying anyway.
It's been three years since the last update; here's how I got it working in 2021 on macOS (as an extension to mario's answer):
# Install stunnel
brew install stunnel
# Find the configuration directory
cd /usr/local/etc/stunnel
# Copy the sample conf file to actual conf file
cp stunnel.conf-sample stunnel.conf
# Edit conf
vim stunnel.conf
Modify stunnel.conf so it looks like this:
(all other options can be deleted)
; **************************************************************************
; * Global options *
; **************************************************************************
; Debugging stuff (may be useful for troubleshooting)
; Enable foreground = yes to make stunnel work with Homebrew services
foreground = yes
debug = info
output = /usr/local/var/log/stunnel.log
; **************************************************************************
; * Service definitions (remove all services for inetd mode) *
; **************************************************************************
; ***************************************** Example TLS server mode services
; TLS front-end to a web server
[https]
accept = 443
connect = 8000
cert = /usr/local/etc/stunnel/stunnel.pem
; "TIMEOUTclose = 0" is a workaround for a design flaw in Microsoft SChannel
; Microsoft implementations do not use TLS close-notify alert and thus they
; are vulnerable to truncation attacks
;TIMEOUTclose = 0
This accepts HTTPS / SSL at port 443 and connects to a local webserver running at port 8000, using stunnel's default bogus cert at /usr/local/etc/stunnel/stunnel.pem. Log level is info and log outputs are written to /usr/local/var/log/stunnel.log.
Start stunnel:
brew services start stunnel # Different for Linux
Start the webserver:
php -S localhost:8000
Now you can visit https://localhost:443 to reach your webserver.
There should be a cert error and you'll have to click through a browser warning, but that gets you to the point where you can hit your localhost with HTTPS requests for development.
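If the browser output is confusing, a quick sanity check from the terminal shows whether stunnel is really terminating TLS (-k skips verification of the bogus default cert):
curl -vk https://localhost:443/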
I've been learning nginx and Laravel recently, and this error has come up many times. It's hard to diagnose because you need to align nginx with Laravel and with the SSL settings in your operating system at the same time (assuming you are making a self-signed cert).
If you are on Windows, it is even more difficult, because you have to fight Unix-versus-Windows line endings when dealing with SSL certs. Sometimes you go through the steps correctly but still get ruined by cert-validation issues. I find the trick is to make the certs on Ubuntu or macOS and email them to yourself, or to use the Windows Subsystem for Linux.
In my case, I kept running into an issue where I had declared HTTPS somewhere, but php artisan serve only works over HTTP.
I just caused this Invalid request (Unsupported SSL request) error again after SSL was hooked up fine. It turned out that I was using Axios to make a POST request to an https:// URL. Changing it to http:// fixed it.
My recommendation to anyone would be to take a look at where and how HTTP/HTTPS is being used.
The textbook definition is probably something like: php artisan serve speaks only plain HTTP, so anything that tries to start TLS against it triggers this error.
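A quick way to confirm this diagnosis from the terminal (assuming the dev server runs on port 8000):
curl -v http://localhost:8000/    # plain HTTP: returns your page
curl -vk https://localhost:8000/  # TLS against the same port: fails, and the server
                                  # logs "Invalid request (Unsupported SSL request)"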
Use Ngrok
Expose your server's port like so:
ngrok http <server port>
Browse with ngrok's secure public address (the one with https).
Note: though it works like a charm, it seems like overkill since it requires internet access; better recommendations would be appreciated.