How to set host IP in Zabbix LLD host discovery

I am using Zabbix 2.4 LLD (low-level discovery) to discover hosts in my system.
I have a script which returns host names in JSON format as described in https://www.zabbix.com/documentation/2.4/manual/discovery/low_level_discovery.
All works fine and new hosts are created, but their IP address is set to the IP address of the host where I am running the discovery script.
How can I set the discovered hosts' IP addresses?
[Screenshots: the host discovery configuration and the discovered host]
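For context, a minimal sketch of what such a discovery script might return (the script name and the {#HOSTNAME} macro are examples; any user-defined LLD macro works):
$ ./discover_hosts.sh
{
  "data": [
    { "{#HOSTNAME}": "web-01" },
    { "{#HOSTNAME}": "web-02" }
  ]
}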

Low-level discovery for hosts was originally implemented for VMware in ZBXNEXT-1633, where discovered hosts should apparently use the same address as vCenter or vSphere.
Unfortunately, what you want is currently not possible, but you can follow ZBXNEXT-2717 to be notified when it is implemented.

Related

Cannot access Google Cloud Compute Instance External IP

I have set up a Google Cloud Compute instance:
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Haswell
Zone: us-east1-c
I can SSH in using the external address.
I have installed vncserver and can access it on port 5901 from localhost as well as from the internal IP.
I am trying to access it from the static, external IP address, but it is not working.
I have configured the firewall to open the port to 0.0.0.0/0, but it is not reachable.
Can anyone help?
After further investigation based on the tips from the two answers (thanks, both!), I have a partial answer:
The Google Cloud Compute instance was set, by default, to not allow HTTP traffic. I reset the configuration to allow HTTP traffic. I then tried the troubleshooting tip of running a small HTTP service in Python. I was able to get a response from the service over the internet.
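For reference, the small HTTP service can be as simple as Python's built-in one-liner (port 80 is assumed here to match the HTTP firewall rule):
$ sudo python -m SimpleHTTPServer 80   # Python 2; on Python 3: sudo python3 -m http.server 80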
The summary of the current situation is as follows:
The external IP address can be reached
It is enabled and working for SSH
It is enabled and working for HTTP
It does not seem to allow traffic from vncserver
Any idea how to configure the compute instance to allow for vncserver traffic?
If you have already verified that neither the Google firewall nor your VM is blocking packets, make sure the VNC service is configured to listen on the external IP address.
You can always use a utility like nmap from outside the Google project to reveal information about the port status.
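A quick check from a machine outside the project might look like this (port 5901 matches the VNC display :1 from the question; <EXTERNAL-IP> is a placeholder):
$ nmap -Pn -p 5901 <EXTERNAL-IP>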
Enable HTTP/HTTPS traffic in the firewall as needed. It will work!
As noted in the update to the question above, the original question is answered: I can access the Google Cloud Compute instance's external IP. My wider issue is still not solved, but I will post a new, more specific question about it.
TL;DR: make sure you are requesting http, not https.
In my case I was following the link from my GCE instance's External IP property, which takes you directly to the https version, and I hadn't set up https, so that was causing the 'site not found' error.
Create an entry in your local SSH config file, as below, with the local forward port. In my case the example forwards to YARN's address, which I want to access in a browser.
Host hadoop
    HostName <External-IP>
    User <Local-machine-username>
    IdentityFile ~/.ssh/<private-key-for-above-user>
    LocalForward 8089 <Internal-IP>:8088
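With that entry in place, opening the tunnel and browsing through it might look like this (-N keeps the SSH session to port forwarding only):
$ ssh -N hadoop
Then open http://localhost:8089 in a browser to reach <Internal-IP>:8088 through the tunnel.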
In addition to having the firewall rules to allow HTTP traffic in both Google Cloud Platform and within the OS of the instance, make sure you install a web server such as Apache or Nginx.
After installing the web server, connect to the instance using SSH and verify that the following command does not fail:
$ sudo wget http://localhost
If the connection succeeds, it means you can access your external URL:
http://<IP-EXTERNAL-VM>
Usually there are two main things to check.
1. Port
By default, only ports 80 and 443 (plus ICMP) are exposed. If your server is running on a different port, create a firewall rule for it.
2. Firewall
Make sure you are allowing HTTP and HTTPS traffic as needed.
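For example, a rule opening the VNC port discussed earlier in this thread might look like this (the rule name allow-vnc is hypothetical):
$ gcloud compute firewall-rules create allow-vnc --allow tcp:5901 --source-ranges 0.0.0.0/0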
For me, the problem was that I had set the firewall rule's direction to 'Egress' instead of 'Ingress'.
If you have already enabled 'https', just disable it and check again.

Hadoop cluster on Google Compute Engine: Accessing master node via REST

I have deployed a Hadoop cluster on Google Compute Engine. I then run a machine learning algorithm (Cloudera's Oryx) on the master node of the cluster. The output of this algorithm is accessed via an HTTP REST API. Thus I need to access the output either through a web browser or via REST commands. However, I cannot resolve the address for the output of the master node, which takes the form http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091.
I have allowed HTTP traffic and opened ports 80 and 8091 on the network, but I cannot resolve the address given. Note this HTTP address is NOT the IP address of the master node instance.
I have followed along with examples for accessing the IP addresses of compute instances. However, I cannot find examples of accessing a single node of a Hadoop cluster on GCE that follow this form (http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091). Any help would be appreciated. Thank you.
The reason you're seeing this is that the "HOSTNAME.c.PROJECT.internal" name is only resolvable from within the GCE network of that same instance itself; these domain names are not globally visible. So, if you were to SSH into your master node first, and then try to curl http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 then you should successfully retrieve the contents, whereas trying to access from your personal browser will just fail to resolve that hostname into any IP address.
So unfortunately, the quickest way for you to retrieve those contents is indeed to use the external IP address of your GCE instance. If you've already opened port 8091 on the network, simply use gcutil getinstance CLUSTER_NAME-m and look for the entry specifying external IP address; then plug that in as your URL: http://[external ip address]:8091.
If you turned up the cluster using bdutil, a more involved but nicer way to access it is the bdutil socksproxy command. This opens a dynamic-port-forwarding SSH tunnel to your master node as a SOCKS5 proxy; you can then configure your browser to use localhost:1080 as its proxy server (make sure to enable remote DNS resolution) and visit the normal http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091 URL.
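The workflow might look like the following (assuming bdutil's default SOCKS port of 1080; curl's --socks5-hostname flag resolves the .internal name through the tunnel, just as the browser's remote DNS setting does):
$ ./bdutil socksproxy
$ curl --socks5-hostname localhost:1080 http://CLUSTER_NAME-m.c.PROJECT_NAME.internal:8091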

SSH into GCE VM via web browser with restricted IP addresses

I've set up my VM to use a network that only allows a whitelist of IP addresses for the SSH protocol on port 22.
If I try to SSH into my instance via the web browser within the developer console the connection is correctly refused, as it isn't originating from one of my permitted IP addresses.
I'm curious if there is a way to have my whitelist of IP addresses and still SSH into the VM via the browser. I know I can still connect using gcutil, and it would obviously work if I had the IP address.
Looking at the documentation, it isn't listed as a known issue.
When connecting from the Developer Console SSH tool, the instance receives the connection from a Google IP range; in a test I made, it came from the 74.125.0.0/16 range. You could try to temporarily whitelist this range and see if you can get access.
Regards,
Paolo
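A temporary rule for that range might look like this (the rule name is hypothetical; the range is just the one observed in the test above):
$ gcloud compute firewall-rules create temp-allow-console-ssh --allow tcp:22 --source-ranges 74.125.0.0/16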

Setting IP address of the Guest operating system while launching qemu

I am launching QEMU using qemu-system-x86_64 along with options. What options should I give to assign an IP address to the guest OS I launched, so that I can ping the guest OS from my host machine?
Can anyone help me with this, and is there any other way to assign the guest OS's IP address besides passing it on the qemu-system-x86_64 command line?
Thanks.
I haven't found a good solution to do this via command line.
First and foremost, your best bet is probably Cloud-Init. I've had varying success but I also haven't spent a ton of time perfecting it, either.
You could utilize DHCP and get the IP address from the guest agent after the VM boots. If you are placing it in a network that doesn't have DHCP then you could consider using dnsmasq on Proxmox.
If you're using multiple VLANs, you could also consider building the VM in a VLAN with DHCP (either from your router or using the aforementioned Proxmox dnsmasq approach) and then SSH / RDP in and set the static address and move the NIC to the right VLAN.
If you're trying to automate this deployment, I'd recommend using Terraform and Ansible (Terraform to build, Ansible to configure). I've found the best approach is to configure and trigger the Terraform run from Ansible and then save the IPs as facts. You can then use the facts to delegate the Ansible task to the temporary IP and log in to set the static IP address. If you're changing VLANs, you can use either Terraform or Ansible to adjust the config, but I've found Terraform to be best for this task.
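Back to the original qemu-system-x86_64 question: one common approach outside a Proxmox-style setup is a host-side tap device. A minimal sketch follows, with the caveat that the address must still be configured inside the guest (guest.img and the 192.168.100.0/24 addresses are examples):
$ sudo ip tuntap add dev tap0 mode tap user $USER
$ sudo ip addr add 192.168.100.1/24 dev tap0
$ sudo ip link set tap0 up
$ qemu-system-x86_64 -m 1024 -hda guest.img -netdev tap,id=net0,ifname=tap0,script=no,downscript=no -device e1000,netdev=net0
Once the guest has configured 192.168.100.2/24 on its interface, ping it from the host:
$ ping -c 3 192.168.100.2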

Heroku Node.js Remote MySQL Database IP Address

I have a remote MySQL database that I am connecting to through Node.js on Heroku. My MySQL host (Bluehost) wants me to input the IP addresses of all remote MySQL connections.
Heroku doesn't have a dedicated IP for my app, so how can I connect? Bluehost mentions something about a Class C IP on its page, but I'm not sure Heroku has one...
Also, I believe I already have all of the heroku environment variables set up correctly:
(heroku config:add EXTERNAL_DATABASE_URL=...)
Thanks :D
Here's what Bluehost says about dynamic IP addresses:
Dynamic IP Addresses
Having a dynamic IP address means that the connecting IP address can change periodically depending on the Internet Service Provider (ISP). You must update the connecting IP in Remote MySQL every time it changes.
(from https://my.bluehost.com/cgi/help/89)
So each time you redeploy your application, you have a chance of getting a different IP address, which makes this approach highly impractical. Why don't you use Heroku's MySQL offering instead?
You can use one of the 'static IP' add-ons and proxy the connection via that static IP; see this discussion.