Many infrastructure configurations require a fully qualified domain name (FQDN) during setup.
How do we set an FQDN on Google Compute Engine instances?
We need an FQDN that can be used internally within the VPC and/or an FQDN that can be used externally.
Here is a document that explains how the FQDN works in a Compute Engine instance.
Maybe this solution fits your needs:
# edit the hosts file and add your FQDN
$ sudo vi /etc/hosts
# prevent the file from being overwritten (make it immutable)
$ sudo chattr +i /etc/hosts
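For example (hypothetical IP and names), you could append the entry like this, before making the file immutable:
# add an FQDN entry for this instance
$ echo "10.128.0.2 myhost.example.internal myhost" | sudo tee -a /etc/hosts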
You can also use Google Cloud DNS as an internal DNS by editing the resolv.conf file.
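A minimal sketch of that resolv.conf edit, assuming a private Cloud DNS zone (the search domain is a placeholder; on GCE the metadata server at 169.254.169.254 normally serves internal DNS):
# /etc/resolv.conf
search example.internal
nameserver 169.254.169.254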
Where are the OpenShift master and node host files in v4.6?
Previously, in v3, they were located at:
Master host files at /etc/origin/master/master-config.yaml
Node host files at /etc/origin/node/node-config.yaml
In OCP v4 the kubelet configuration is managed dynamically, so instead of reading a configuration file on the node hosts as in OCP v3, you can check your current kubelet configuration using the following procedures.
Further information is here: Generating a file that contains the current configuration.
You can check it using the referenced procedure (generate the configuration file) or the oc CLI as follows:
$ oc get --raw /api/v1/nodes/${NODE_NAME}/proxy/configz | \
jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
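Note that ${NODE_NAME} must be set to an actual node name first; one way to do that (a sketch):
$ NODE_NAME=$(oc get nodes -o jsonpath='{.items[0].metadata.name}')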
These files no longer exist in the same form as in OCP 3. To change anything on the machines themselves, you'll need to create MachineConfigs, as CoreOS is an immutable operating system. If you change anything manually on the filesystem and reboot the machine, your changes will typically be reset.
To modify worker nodes, often the setting you are looking for can be configured via a kubeletConfig: Managing nodes - Modifying Nodes. Note that only certain settings can be changed; others cannot be changed at all.
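As an illustrative sketch of that kubeletConfig route (the pool label and the maxPods value are assumptions for the example, not taken from the docs):
oc apply -f - <<'EOF'
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods   # the target MachineConfigPool must carry this label
  kubeletConfig:
    maxPods: 250                     # example of a tunable kubelet setting
EOF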
For the master config, it depends on what you want to do: you will potentially change the setting via a machineConfigPool or, for example, edit API server settings via oc edit apiserver cluster. So it depends on what you actually want to change.
I have an external TLS-enabled service that I want my pods to access:
https://abc.myservice.acme
abc.myservice.acme resolves to 1.2.3.4. I wish to override this IP address with another (say 5.6.7.8) for the pods to use.
I would add an entry to each pod's /etc/hosts to override the IP address, but I have a feeling that this is an anti-pattern and there's probably a better way of doing this.
I investigated/tried:
1. Creating a service + endpoint. This works, but the problem is that the service name is not present in the SSL certificate's SAN entry, so I'm getting an "SSL: no alternative certificate subject name matches target host name 'svc-external-acme'" error. Sure, I can add it to the certificate SAN, but that's probably not the correct solution.
2. Installing DNSmasq (https://developers.redhat.com/blog/2015/11/19/dns-your-openshift-v3-cluster/) on the worker nodes, but again it feels like a complicated hack. There must be a simpler way.
3. hostAliases. Unfortunately, this is only available for kube 1.7+, but I'm on OpenShift 3.5 (kube 1.6). This would have been perfect.
Is there any way I can accomplish #3 in OpenShift?
I can edit the image to echo my desired entry into /etc/hosts, but I'm saving that as a last resort.
-M
Maybe I'm a bit late answering this question, but I had a similar issue with our dev environment, and the way we managed to resolve it was:
We created a config map with the desired content of the /etc/hosts file. I'm using hosts-delta as the name of the config map entry.
We defined a mount point for that config map inside the container (/app/hosts/). I think the directory /app/hosts should exist within the container filesystem, so you should add a RUN mkdir -p /app/hosts to your Dockerfile.
We modified the deployment config YAML, adding a post-start hook in this way:
lifecycle:
  postStart:
    exec:
      command:
        - /bin/sh
        - '-c'
        - |
          cat /app/hosts/hosts-delta >> /etc/hosts
The previous snippet should be placed inside the spec > template > spec > containers element.
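For completeness, a sketch of how the config map and mount described above might be wired up (the names hosts-delta and /app/hosts come from the steps above; dc/myapp is a placeholder):
# create the config map from a local file holding the extra hosts entries
oc create configmap hosts-delta --from-file=hosts-delta=./hosts-delta
# mount it into the container at /app/hosts
oc set volume dc/myapp --add --name=hosts-delta \
  --type=configmap --configmap-name=hosts-delta --mount-path=/app/hosts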
Hope this helps somebody
My GitLab is on a virtual machine on a host server. I reach the VM via a non-standard SSH port (i.e. 766), which an iptables rule then forwards from host:766 to vm:22.
So when I create a new repo, the instructions to add a remote provide a malformed URL (as it doesn't use port 766). For instance, the web interface gives me this:
Malformed
git remote add origin git@git.domain.com:group/project.git
Instead of a URL containing :766/ before the group.
Well-formed
git remote add origin git@git.domain.com:766/group/project.git
So each time I create a repo, I have to make this modification manually, and so do my collaborators.
How can I fix that?
In Omnibus-packaged versions you can modify that property in the /etc/gitlab/gitlab.rb file:
gitlab_rails['gitlab_shell_ssh_port'] = 766
Then, you'll need to reconfigure GitLab:
# gitlab-ctl reconfigure
Your URIs will then be correctly displayed as ssh://git@git.domain.com:766/group/project.git in the web interface.
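To check that the displayed URL works end to end (host, group, and project are the placeholders from above), listing the remote refs should succeed:
git ls-remote ssh://git@git.domain.com:766/group/project.git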
If you configure the ssh_port correctly in config/gitlab.yml, the web pages will show the correct repo URL.
## GitLab Shell settings
gitlab_shell:
  ...
  # If you use non-standard ssh port you need to specify it
  ssh_port: 766
P.S. the correct URL is:
ssh://git@git.domain.com:766/group/project.git
Edit: after the change you need to clear caches, etc.:
bundle exec rake cache:clear assets:clean assets:precompile RAILS_ENV=production
N.B.: this was tested on an old GitLab version (v5-v6) and might not be suitable for modern instances.
You can achieve similar behavior in a two-step process:
1. Edit config/gitlab.yml
On the server, set the port to the one you use:
ssh_port: 766
2. Edit ~/.ssh/config
On your machine, add the following section corresponding to your GitLab host:
Host sub.domain.com
    Port 766
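With that in place, the SCP-style remote shown by the web interface works without spelling out the port (host and path are the placeholders from above):
git clone git@sub.domain.com:group/project.git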
Limitation
You will need to repeat this operation on each user's computer…
References
GitLab and a non-standard SSH port
Easy way to fix this issue:
ssh://git@my-server:4837/~/test.git
git clone -v ssh://git@my-server:4837/~/test.git
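For an existing clone, you can also repoint the remote at the well-formed URL (same placeholder host and path):
git remote set-url origin ssh://git@my-server:4837/~/test.git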
I run IPython 0.12.1 on Ubuntu 12.04. You can run it in the browser using the notebook interface by running:
ipython notebook --pylab
Configuration files can be found in ~/.config/ipython/profile_default/. It seems that connection parameters for every kernel are placed in ~/.config/ipython/profile_default/security/kernel-4e424cf4-ba44-441a-824c-c6bce727e585.json. Here is the content of this file (new files are created as you start new kernels):
{
  "stdin_port": 54204,
  "ip": "127.0.0.1",
  "hb_port": 58090,
  "key": "2a105dd9-26c5-40c6-901f-a72254d59876",
  "shell_port": 52155,
  "iopub_port": 42228
}
It's rather self-explanatory, but how can I set up a server with a permanent configuration, so that I can use the notebook interface from other computers on the LAN?
If you are using an old version of the notebook, the following could still apply. For new versions see the other answers below.
Relevant section of the IPython docs
The Notebook server listens on localhost by default. If you want it to be visible to all machines on your LAN, simply instruct it to listen on all interfaces:
ipython notebook --ip='*'
Or a specific IP visible to other machines:
ipython notebook --ip=192.168.0.123
Depending on your environment, it is probably a good idea to enable HTTPS and a password when listening on external interfaces.
If you plan on serving publicly a lot, then it's also a good idea to create an IPython profile (e.g. ipython profile create nbserver) and edit the config accordingly, so all you need to do is:
ipython notebook --profile nbserver
to load all your IP/port/SSL/password settings.
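As a hedged sketch of what that profile's config might contain (the file name matches this IPython vintage; the port, certificate path, and password hash are placeholders):
# append notebook server settings to the nbserver profile
cat >> ~/.config/ipython/profile_nbserver/ipython_notebook_config.py <<'EOF'
c.NotebookApp.ip = '*'
c.NotebookApp.port = 9999
c.NotebookApp.certfile = u'/path/to/mycert.pem'   # enables HTTPS
c.NotebookApp.password = u'sha1:...'              # hash generated with IPython.lib's passwd()
EOF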
The accepted answer/information is for an old version. How do you enable remote access to your new Jupyter notebook? I've got you covered.
First, generate a config file if you don't have it already:
jupyter notebook --generate-config
Notice the output of this command, as it tells you where the jupyter_notebook_config.py file was generated. Or, if you already have it, it will ask whether you would like to overwrite it with the default config. Edit the following line:
## The IP address the notebook server will listen on.
c.NotebookApp.ip = '0.0.0.0' # Any ip
For added security, type in a python/IPython shell:
from notebook.auth import passwd; passwd()
You will be asked to input and confirm a password string. Copy the contents of the string, which should be of the form type:salt:hashed-password. Find and edit the lines as follows:
## Hashed password to use for web authentication.
#
# To generate, type in a python/IPython shell:
#
# from notebook.auth import passwd; passwd()
#
# The string should be of the form type:salt:hashed-password.
c.NotebookApp.password = 'type:salt:the-hashed-password-you-have-generated'
## Forces users to use a password for the Notebook server. This is useful in a
# multi user environment, for instance when everybody in the LAN can access each
# other's machine through ssh.
#
# In such a case, serving the notebook server on localhost is not secure since
# any user can connect to the notebook server via ssh.
c.NotebookApp.password_required = True
## Set the Access-Control-Allow-Origin header
#
# Use '*' to allow any origin to access your server.
#
# Takes precedence over allow_origin_pat.
c.NotebookApp.allow_origin = '*'
(Re)start your jupyter notebook, voila!
I'm running RabbitMQ v2.0.0 on a Linux machine. The Mnesia base is currently the default, but within that directory RabbitMQ creates directories, e.g. rabbit@ip-123.1.1.123.
The IP in the directory name is based on the inet addr of the machine. These directories hold information about users, exchanges, and vhosts (I think).
My question is: how can I configure these directory names so they are not based on the IP?
To change the Mnesia directory, just set MNESIA_DIR in /etc/rabbitmq/rabbitmq.conf.
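A minimal sketch of that setting (the path is an example; in RabbitMQ of this vintage the file is read as environment-variable assignments):
# /etc/rabbitmq/rabbitmq.conf
MNESIA_DIR=/var/lib/rabbitmq/mnesia/fixed-name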
Also, a great place to ask RabbitMQ related questions is on the rabbitmq-discuss mailing list.
It seems you can edit the script files (rabbitmq-server, rabbitmq-multi and rabbitmqctl). In these scripts, at the top, there is a hostname variable.
I set the hostname to localhost and restarted.
This is not the best solution, but it's good enough for my requirements. The hostname must be a proper address; it cannot be something arbitrary.
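For illustration, the edit described above might look like this near the top of the rabbitmq-server script (exact script contents vary by version, so treat this as an assumption):
# force the node name to be based on localhost rather than the machine's hostname
HOSTNAME=localhost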
The main problem is that your new machine has a new hostname, and the directory is named after it (just renaming the directory, as mentioned before, does not help), so we need to change the machine's hostname and make RabbitMQ work with the old files.
Let "ip-0-0-0-0" be the old machine name (so there should be a Mnesia folder /var/lib/rabbitmq/mnesia/ip-0-0-0-0), and the new machine's hostname something like "ip-1-1-1-1"; the new name does not matter, as we will overwrite it. Execute the following commands:
sudo -s
echo "127.0.0.1 ip-0-0-0-0" >> /etc/hosts
echo "ip-0-0-0-0" > /etc/hostname
reboot
After the reboot, your machine will have the new name and RabbitMQ should work with the old files.