I have the domain my-domain.com, and I want to set up (DNS) subdomains that point to my 'PUBLIC IP', where my website is hosted.
Currently I have this set up in DNS:
-------------------------------------------------
| Domain            | Type | Content            |
-------------------------------------------------
| www.my-domain.com | A    | 'PUBLIC IP'        |
-------------------------------------------------
and on the box hosting my website, I set up one nginx server block pointing to my main website, 'www.my-domain.com':
server {
    listen 80;
    server_name www.my-domain.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
Now I want to run another website on the same box, and I want it to be accessible as blogs.my-domain.com. For this, I'm going to set up another nginx server block:
server {
    listen 80;
    server_name blogs.my-domain.com;

    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}
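(After adding this block I assume I will also need to test and reload the nginx configuration, something like the following, but my main question is about DNS:
sudo nginx -t && sudo nginx -s reload)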
How do I configure the DNS entry for blogs.my-domain.com?
Thanks in advance for the help.
You can create it as a CNAME record:
blogs.my-domain.com CNAME www.my-domain.com
Note that with some DNS software the above won't work as written, because it uses relative domain names, i.e. the names don't terminate in a trailing dot (.).
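For example, if your zone is a BIND-style zone file, the record would usually be written with fully qualified names (note the trailing dots):
blogs.my-domain.com.    IN    CNAME    www.my-domain.com.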
Basically what I want is this:
first.name.com:25565 -> 127.0.0.1:25562
second.name.com:25565 -> 127.0.0.1:25565
This is for some Minecraft servers I'm hosting.
What you are looking for is name-based virtual hosting. At layer 4 (the transport layer) you can only route to different services by IP or port number; however, a number of protocols, including HTTP(S), transmit the domain name used in the request, and this allows a reverse proxy such as Apache or Nginx to forward the request to the actual service on the same or even a different host. Squid is normally used as a forward proxy on the client side, which is not helpful in this case; what you want is a reverse HTTP(S) proxy on the server side.
I am most familiar with Apache, so I will present that here, but Nginx and others can do it as well. You will need Apache's name-based virtual hosting to create a different service per hostname, and then reverse proxy each hostname to the real service behind it. As a note, you can't have both Apache and the backend service listening on port 1234 on the same address; see the note at the end about binding them to different addresses.
Listen 10.1.1.1:1234
# NameVirtualHost is only needed on Apache 2.2; Apache 2.4 ignores it.
NameVirtualHost 10.1.1.1:1234

<VirtualHost 10.1.1.1:1234>
    ServerName first.name.com
    ProxyPreserveHost On
    ProxyPass "/" "http://127.0.0.1:4321/"
    ProxyPassReverse "/" "http://127.0.0.1:4321/"
</VirtualHost>

<VirtualHost 10.1.1.1:1234>
    ServerName second.name.com:1234
    ProxyPreserveHost On
    ProxyPass "/" "http://127.0.0.1:1234/"
    ProxyPassReverse "/" "http://127.0.0.1:1234/"
</VirtualHost>
You also need to make sure that the mod_proxy and mod_proxy_http modules are enabled for Apache. On Debian/Ubuntu, this can be done as follows:
$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
And a final note: you asked for the same port on the proxy, 1234, to be redirected to the local host on 127.0.0.1. Normally I would recommend using a different port for the actual service, but you can share the port if you bind Apache to the external IP explicitly, as I did in the example above using 10.1.1.1, and then bind the internal service only to 127.0.0.1. If you use the normal wildcard binding, written as either 0.0.0.0 or *, the two services will conflict.
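If it helps, a quick way to double-check that the two listeners ended up on different addresses (just a sketch; 1234 is the port from the example above):
$ sudo ss -tlnp | grep ':1234'
# expect Apache bound to 10.1.1.1:1234 and the backend bound only to 127.0.0.1:1234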
OK, so here's what I ended up doing:
mc.name.com is pointed at the server's hostname using a CNAME record.
Then I added an SRV record to make port 25565 point at 25562 (or whatever port I need it to be):
_minecraft._tcp.mc.muchieman.com SRV 900 0 5 25562 mc.muchieman.com.
900 being the TTL, 0 the priority, 5 the weight, and 25562 the port to point to.
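For reference, the same record in standard zone-file field order (name, TTL, class, type, then priority, weight, port, target) looks roughly like this:
_minecraft._tcp.mc.muchieman.com. 900 IN SRV 0 5 25562 mc.muchieman.com.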
I have a Kubernetes cluster using the nginx ingress controller to proxy requests to the backend. There is an LB in front.
LB <-> Nginx Ingress <-> WLS in K8s
When I terminate SSL at the LB and the backend sends a redirect, the redirect is sent with a Location that starts with http. However, WebLogic recognizes the WL-PROXY-SSL request header and will then send an https redirect.
I am trying to set this request header on the nginx ingress controller for specific URL patterns only.
I tried using:
nginx.ingress.kubernetes.io/configuration-snippet: |
proxy_set_header WL-PROXY-SSL: "true";
It didn't work.
I even tried:
more_set_headers "WL-PROXY-SSL: true";
nginx.org/location-snippets: |
proxy_set_header "WL-PROXY-SSL: true";
I also tried the custom-headers ConfigMap, but it sets the header for all resources. And while I see the entry in nginx.conf, it does not take effect even with the global custom-headers ConfigMap.
Is there a good example of adding this header to the request?
Thanks in advance.
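For what it's worth, here is a sketch of the configuration-snippet annotation using nginx's usual proxy_set_header syntax (header name without a trailing colon); the Ingress name, host, path and service below are placeholders for illustration, not a confirmed fix:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wls-ingress                      # placeholder
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header WL-PROXY-SSL "true";
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com                # placeholder host
    http:
      paths:
      - path: /secure                    # only this URL pattern gets the header
        pathType: Prefix
        backend:
          service:
            name: wls-service            # placeholder service
            port:
              number: 7001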
1. What I've tried
I want to build an OCP cluster (actually a single-node, all-in-one setup) like this blog post
link: openshift.com/blog/revamped-openshift-all-in-one-aio-for-labs-and-fun
and I also referred to the official document: Installing bare metal.
So, what I have tried is this
(I used VirtualBox to create four VMs):
- 1 bastion
- 1 dns
- 1 master
- 1 bootstrap
These VMs are on the same network.
First, I created the Ignition files to boot the master and bootstrap nodes.
The install-config.yaml that I used:
apiVersion: v1
baseDomain: hololy-local.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 1
metadata:
  name: test
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
I only changed baseDomain, the master replica count, pullSecret, and sshKey.
After making the Ignition files, I started to boot the bootstrap node and the master node with the ISO file.
The bootstrap node was installed successfully, but a problem occurs on the master node.
2. Details
Before starting the master node installation, I had to set up DNS, because unlike the bootstrap installation, the master node requests domain information during installation.
IP addresses:
dns: 192.168.56.114
master: 192.168.56.150
The DNS zone is like this:
And I started to set up the master node using these parameters:
coreos.inst.install_dev=sda
coreos.inst.image_url=http://192.168.56.114/rhcos438.x86_64.raw.gz
coreos.inst.ignition_url=http://192.168.56.114/master.ign
ip=192.168.56.150::192.168.56.254:255.255.255.0:core0.hololy-local.com:enp0s3:none nameserver=192.168.56.114
The installation finished successfully, but when it boots without the boot disk (.iso), an error comes out.
It seems to be trying to find the master configuration file at api-int.aio.hololy-local.com:22623, and it connects to the IP address that I wrote in the zone file.
But strangely, the connection is refused continuously.
Since I set a static IP during the RHCOS installation, a ping test to 192.168.56.150 works successfully.
I think port 22623 is blocked. But how can I open the port before the OS boots?
I don't know how to solve it.
Thanks.
I solved it.
The difference between a 3.11 installation and a 4.x installation is whether a load balancer (LB) is necessary.
In 4.x an LB is required, so you have to set one up.
In my situation, I set up the LB with nginx, and the sample is like this:
stream {
    upstream ocp_k8s_api {
        #round-robin;
        server 192.168.56.201:6443;   #bootstrap
        server 192.168.56.202:6443;   #master1
        server 192.168.56.203:6443;   #master2
        server 192.168.56.204:6443;   #master3
    }
    server {
        listen 6443;
        proxy_pass ocp_k8s_api;
    }

    upstream ocp_m_config {
        #round-robin;
        server 192.168.56.201:22623;  #bootstrap
        server 192.168.56.202:22623;  #master1
        server 192.168.56.203:22623;  #master2
        server 192.168.56.204:22623;  #master3
    }
    server {
        listen 22623;
        proxy_pass ocp_m_config;
    }

    upstream ocp_http {
        #round-robin;
        server 192.168.56.205:80;     #worker1
        server 192.168.56.206:80;     #worker2
    }
    server {
        listen 80;
        proxy_pass ocp_http;
    }

    upstream ocp_https {
        #round-robin;
        server 192.168.56.205:443;    #worker1
        server 192.168.56.206:443;    #worker2
    }
    server {
        listen 443;
        proxy_pass ocp_https;
    }
}
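One note on where this goes: the stream {} block has to live at the top level of nginx.conf (outside the http {} block), and the nginx build needs the stream module. For example (the file name below is just an example):
# in /etc/nginx/nginx.conf, main context, next to the http {} block
include /etc/nginx/ocp-lb.conf;   # a file containing the stream {} block above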
thanks.
How do I create a FIWARE instance and connect it to the internet?
I like the idea and I have big plans for using this infrastructure, but...
I have been trying to create an instance and make an SSH connection to it for some time now.
Created a key pair
Created a security group (ports 22, 3306, 1)
Created an Ubuntu 14 instance (also tried others)
Also tried Ubuntu 12, POI, and others
Added node-int-net-01 and node-int-noinet-net-02 to it when creating it
Also tried with only one network
Allocated a floating IP
Associated it with the local IP that came from "node-int-net-01"
Statuses:
Instance: ACTIVE, Power State: RUNNING
"node-int-net-01" in the network list: shared-subnet 192.168.192.0/18, Shared: Yes, Status: ACTIVE, Admin State: UP
Inside "node-int-net-01":
Network: Admin State: DOWN, Shared: No, External Network: No
Subnet: DHCP and everything OK
Ports: Status: BUILD, Admin State: UP
The confusing parts are (just for clues; you don't have to answer these if we have a solution):
How can the network be EXTERNAL-SHARED-ACTIVE-UP and DOWN-NOT_SHARED-NO_EXTERNAL at the same time? Perhaps there's an error.
What does port status BUILD mean? It must have been "building" the port for about 3 days already. Should I build something there; is it an instruction or a status? Perhaps it means BUILT or BUILDING instead.
What does instance ACTIVE mean? Is it still active (busy) and should I wait, or can it already be actively used? From the VM display I never saw it reach a Unix prompt; is it FIWARE itself somehow using this telnet instance? Instead I saw things like
"request error",
"connection timeout",
"socket.error",
"Error 101 Network is unreachable",
"cloud-init-nonet [13:31]: waiting 120 seconds for network device",
numerous black screens, and a never-ending "Booting from hard disk".
From the instance log I saw an endless "Waiting for network configuration", but that one was cured.
I did see a localhost login prompt, but since I only created a PEM key, I can't imagine what to do with it; where would I get a root password? I guess it was some error that it ended up there.
The latest status from the instance log is:
cloud-init-nonet[4.52]: static networking is now up
* Starting configure network device                              [ OK ]
* Starting Mount network filesystems                             [ OK ]
* Stopping Mount network filesystems                             [ OK ]
* Stopping cold plug devices                                     [ OK ]
* Stopping log initial device creation                           [ OK ]
* Starting enable remaining boot-time encrypted block devices    [ OK ]
Cloud-init v. 0.7.5 running 'init' at Sat, 16 Apr 2016 01:23:11 +0000. Up 5.07 seconds.
ci-info: ++++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
ci-info: +--------+------+-----------------+---------------+-------------------+
ci-info: | Device | Up | Address | Mask | Hw-Address |
ci-info: +--------+------+-----------------+---------------+-------------------+
ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . |
ci-info: | eth0 | True | 192.168.242.127 | 255.255.192.0 | fa:16:3e:7a:47:94 |
ci-info: +--------+------+-----------------+---------------+-------------------+
ci-info: +++++++++++++++++++++++++++++++++Route info++++++++++++++++++++++++++++++++++
ci-info: +-------+---------------+---------------+---------------+-----------+-------+
ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
ci-info: +-------+---------------+---------------+---------------+-----------+-------+
ci-info: | 0 | 0.0.0.0 | 192.168.192.1 | 0.0.0.0 | eth0 | UG |
ci-info: | 1 | 192.168.192.0 | 0.0.0.0 | 255.255.192.0 | eth0 | U |
ci-info: +-------+---------------+---------------+---------------+-----------+-------+
For ping and ssh I get "Destination Host Unreachable" and "No route to host".
I also tried allocating a floating IP from the "federation" pool, but with that IP I just got timeouts for ping and ssh.
I have already read:
wiki
fiware help
stackoverflow
I also followed the steps in this slideshow: http://www.slideshare.net/fermingalan/developing-your-first-application-using-fi-ware-20130903
http://cosmos.lab.fi-ware.org/cosmos-gui/ seems to be down
EDIT: I can use this one (I need to use https and accept the bad cert)
http://forge.fiware.org/plugins/mediawiki/wiki/fiware/index.php/FIWARE.OpenSpecification.Data.BigData_R4#Basic_concepts
http://catalogue.fiware.org/enablers/bigdata-analysis-cosmos/documentation - no info about it there either.
Any ideas? Perhaps there is a UI (other than the web page at https://cloud.lab.fiware.org/, which seems to be in early beta) for using FIWARE that can do all the "anyway-mandatory" steps for users (developers)?
Maybe the problem is that I'm a software developer, not a network administrator, and perhaps this interface is meant for Linux network administrators.
The message "Error 101 Network is unreachable" shows that there was a problem in the VM network. node-int-net-01 is the shared network to be joined with the public network, while node-int-noinet-net-02 is to be joined with a network to use VPN. You shouldn't use both networks in the same VM, just you should use node-int-net-01.
The code messages like BUILD, ACTIVE and so on, are codes belonging to Openstack.
Regarding ping, you should open the icmp port in the security port to allow it.
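For example, assuming the standard OpenStack CLI works against your FIWARE Lab region and your instance uses the "default" security group (adjust the group name to yours), opening ICMP would look roughly like this:
# allow inbound ICMP (ping) on the security group
openstack security group rule create --protocol icmp --ingress default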
Anyway, if you continue having problems, you can send a mail to FIWARE Lab support at fiware-lab-help@lists.fiware.org, indicating your concrete data.
Using Nginx, I'm trying to configure my server to accept all domains that point to its IP and show them a specific website, but when www.example.com (the main website) is accessed, I'd like to show different content.
Here's what I have done so far:
server {
    # Redirect www to non-www
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    # rest of the configuration
}

server {
    # Catch-all
    listen 80 default_server;
    # I also tried
    # server_name _;
    # without any luck.
    # Rest of the configuration
}
The problem with this configuration is that every request made to this server that is not for www.example.com or example.com is handled by the example.com server configuration, not the catch-all.
I'd like to catch only www.example.com and example.com in the first two configurations, and all the others in the last configuration.
I suggest putting your catch-all server block at the top of the file :)
I think nginx wants default servers to be at the top of a file: if no block is explicitly marked default_server for a given address:port, nginx treats the first server block it finds for that address:port as the default.
I have a lot of files on my server, but there is one with a default server as its first server declaration, and that works.
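For illustration, a sketch of that ordering with everything in one file: the catch-all is declared first and also marked default_server explicitly (server names and ports are the ones from the question):
server {
    # Catch-all: first declaration and the explicit default for *:80
    listen 80 default_server;
    server_name _;
    # content served to every other domain
}

server {
    # Redirect www to non-www
    listen 80;
    server_name www.example.com;
    return 301 $scheme://example.com$request_uri;
}

server {
    listen 80;
    server_name example.com;
    # main site configuration
}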