Google Maps through proxy server with NGINX - google-maps

I am trying to install an Angular application which uses Google Maps on a very restricted intranet. I have access to Google Maps from my server (which serves my app using NGINX), but not from my client. These are the steps I have taken so far:
1 - I set the server's IP for maps.googleapis.com in the /etc/hosts file on the client.
2 - I set the Google Maps IP in the /etc/hosts file on the server.
3 - I created a conf file so NGINX knows it needs to proxy_pass this domain:
server {
    listen 80;
    server_name maps.googleapis.com;

    location / {
        # forward the request URI and query string to the Google Maps IP
        proxy_pass http://216.58.212.10/$uri$is_args$args;
        proxy_set_header Host $host:$server_port;
    }
}
I can download the first Google Maps API request:
http://maps.googleapis.com/maps/api/js?v=3.exp&libraries=visualization&sensor=false&callback=onGoogleReady
But when it tries to download this one:
http://maps.googleapis.com/maps/api/js/ViewportInfoService.GetViewportInfo?1m6&1m2&1d38.48493576049805&2d-9.36532974243164&2m2&1d38.97382736206055&2d-8.891716003417969&2u12&4sen-US&5e0&6sm%40366000000&7b0&8e0&callback=_xdc_._kxuspe&token=52829
It shows me this error:
The Google Maps Javascript must be downloaded directly from Google's servers.
Am I missing something here? Has anybody done this before? And, more importantly: is it possible?
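To confirm that the proxy path itself works from the client, a minimal check might look like this (a sketch; it assumes the /etc/hosts entry from step 1, so the name resolves to my NGINX server):
curl -v "http://maps.googleapis.com/maps/api/js?v=3.exp&libraries=visualization&sensor=false"
If the proxy is working, this should return the bootstrap script just like the first request above; in my case the error only appears on the follow-up ViewportInfoService request.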

Related

redirect sub subdomain to subdomain in cloudflare

I am trying to redirect app.test.dev.mydomain.io to another origin, app.dev.mydomain.io, because there is an iOS app which uses that origin, but I get an ERR_SSL_VERSION_OR_CIPHER_MISMATCH error. The app is running on a Kubernetes cluster with an ingress controller to access it.
Do you have any suggestions for doing that?

Connect to openshift app via lwip embedded hardware

I have uploaded a simple REST API application to OpenShift (Starter program).
I also have STM32-based hardware running the lwIP (TCP/IP) stack, and my goal is to connect it to the OpenShift app above.
lwIP uses a function (tcp_connect) which takes the external IP of the app.
However, I am struggling to understand and find the external IP of the OpenShift service running in a pod.
Any suggestions?

How to find the external IP?

I have a Python application which has been deployed to OpenShift.
I am using an external REST service in my application. In order to use this service, the developers of the REST service have to whitelist my IP, because a firewall blocks unauthorized IP addresses.
How can I find the external IP of my application in OpenShift? I tried a few oc commands, but I am not sure whether I need the IP of the pod or of the service.
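The kind of oc commands I tried look roughly like this (a sketch; they only show cluster-internal addresses):
oc get pods -o wide    # pod IPs and the node each pod runs on
oc get svc             # Service cluster IPs (and EXTERNAL-IP, if one is set)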
Out of the box, traffic from internal cluster components will appear to external infrastructure as if it is coming from whichever OpenShift compute host the pods are currently scheduled on.
Information on internal cluster networking, and on how traffic traverses from a process running inside a pod to the external network, can be found at SDN: Packet Flow.
In your case you could have the external application whitelist all of the IP addresses of the compute hosts that are expected to run your application pods.
Alternatively, you could set up an EgressIP. This will cause all traffic originating from a specific OpenShift project to appear as if it is originating from a single IP address. You could then have your external application whitelist the EgressIP address.
Documentation for configuring EgressIP can be found in the official documentation under Enabling Static IPs for External Project Traffic.
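As a sketch, the setup in that document boils down to two patches (the project name, node name, and address here are placeholders):
# pin all egress traffic from the project to one address
oc patch netnamespace myproject -p '{"egressIPs": ["192.168.1.100"]}'
# allow a node to host that egress address
oc patch hostsubnet node1 -p '{"egressIPs": ["192.168.1.100"]}'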
What you are searching for is the external IP of the Service. A Service acts as a load balancer for your pods but by default it only has a cluster-wide IP address. If you need a URL to access it from the outside, you can create a Route. For your purpose where you need an actual external IP address, you can assign the Service an external IP manually. Information on how to do this can be found in the official OpenShift Docs.
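A minimal sketch of assigning the external IP manually (the Service name and address are placeholders, and the address must be one your cluster can actually route):
oc patch svc myservice -p '{"spec":{"externalIPs":["192.168.132.253"]}}'
After that, oc get svc should show the address in the EXTERNAL-IP column.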

Cannot access Google Cloud Compute Instance External IP

I have set up a Google Cloud Compute instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can ssh in using the external address.
I have installed the vncserver and can access it on port 5901 from localhost as well as the internal IP.
I am trying to access it from the static, external IP address, but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
------ After further investigation, following the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in Python. I was able to get a response from the service over the internet.
The summary of the current situation is as follows:
- The external IP address can be reached
- It is enabled and working for SSH
- It is enabled and working for HTTP
- It does not seem to allow traffic from vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
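The "small HTTP service in python" test mentioned above can be repeated on the VNC port itself, to separate firewall problems from vncserver problems (a sketch, assuming Python 3 on the VM; stop vncserver first so the port is free):
python3 -m http.server 5901 --bind 0.0.0.0
# from outside: curl http://EXTERNAL_IP:5901/
If that responds, the firewall path to 5901 is fine and the remaining issue is how vncserver itself is listening.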
If you have already verified that the Google firewall and your VM are not blocking packets, you must make sure that the VNC service is configured to listen on the external IP address.
You can always use a utility like nmap from outside the Google project to reveal information on the port status.
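For example, a quick check of the VNC port from a machine outside the project might be (a sketch; EXTERNAL_IP is the instance's static address):
nmap -Pn -p 5901 EXTERNAL_IP
# "open" means reachable, "filtered" points at a firewall, "closed" means nothing is listening there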
Enable HTTP/HTTPS traffic in the firewall as per your need. It will work!
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip to run a small HTTP service in python. I was able to get a response from the service over the internet.
As such, the original question is answered: I can access the Google Cloud Compute instance's external IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TL;DR: make sure you are requesting http, not https.
In my case I was following the link from my Compute Engine instance's External IP property, which takes you directly to the https version, and I didn't set up https, so that was causing the 'site not found' error.
Create an entry in your local SSH config file as below, with the local forward port mentioned. In my case it forwards to YARN's IP, which I want to access in a browser.

Host hadoop
    HostName <External-IP>
    User <Local-machine-username>
    IdentityFile ~/.ssh/<private-key-for-above-user>
    # forward local port 8089 to port 8088 on the internal IP
    LocalForward 8089 <Internal-IP>:8088
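A minimal usage sketch with the alias above:
ssh -N hadoop
# then open http://localhost:8089 in the browser to reach the forwarded service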
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, connect to the instance using SSH and verify that you do not get a failed connection with the following command:
$ sudo wget http://localhost
If the connection succeeds, it means that you can access your external URL:
http://<IP-EXTERNAL-VM>
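If no web server is installed yet, a minimal sketch for a Debian/Ubuntu image (assumed; adjust the package manager for your distribution):
$ sudo apt-get update
$ sudo apt-get install -y apache2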
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 and ICMP are exposed. If your server is running on a different port, create a firewall rule for it.
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic based on your needs.
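For example, a rule for the VNC port discussed above might look like this (a sketch; the rule name is arbitrary):
gcloud compute firewall-rules create allow-vnc --direction=INGRESS --allow=tcp:5901 --source-ranges=0.0.0.0/0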
For me the problem was that I set up the traffic for the firewall rule to be 'Egress' instead of 'Ingress'.
If you have already enabled 'https', just disable it and check again.

Error connecting to service hosted in Service Fabric cluster in Azure from my browser

I authored a Web API service hosted in Service Fabric.
I navigated successfully to the service endpoint (on my machine) with the following URL: http://localhost:2500/days/v1.0/ (i.e. I can see the response).
Next, I created an UNSECURED Service Fabric cluster in Azure.
I published my local fabric app to Azure through Visual Studio.
I successfully navigated to the Fabric Explorer in Azure with the URL: http://xyz1234fake.westus.cloudapp.azure.com:19080/Explorer
When I looked at my service instance in Explorer, it shows the URL as http://10.0.0.5:2500/days/v1.0/
In the browser, I replaced the above internal Azure IP address with the Azure cluster's host name, i.e. changed the URL from http://10.0.0.5:2500/days/v1.0/ to http://xyz1234fake.westus.cloudapp.azure.com:2500/days/v1.0/
I was not able to navigate to the above URL.
What am I doing wrong? Where should I look for troubleshooting?
19000 and 19080 are reserved for communication to the cluster itself (19080 for the Explorer). You need to set up a new load balancer rule/probe for your application. You can do this in the Azure Portal under "Load Balancer".
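A rough Azure CLI equivalent for the endpoint in the question (a sketch; the resource group and load balancer names are placeholders, and 2500 is the service port above):
az network lb probe create --resource-group MyRG --lb-name MyClusterLB --name AppPortProbe --protocol tcp --port 2500
az network lb rule create --resource-group MyRG --lb-name MyClusterLB --name AppPortRule --protocol tcp --frontend-port 2500 --backend-port 2500 --probe-name AppPortProbe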
You need to open that port on the cluster. 19080 is open by default for non-SSL connections, so if you just switch to that it would work. Be careful not to use a port reserved for your services.