How to permanently save a sub-interface IP (logical/virtual interface) on Solaris - solaris-10

I am using Solaris 10.
I want to add sub-interfaces (including IP and netmask) and save them so they persist across server reboots (e.g. bge0:1, bge0:2, ...).
Generally, I use NetConf to add a sub-interface and assign an IP to it, but that takes too long when adding multiple interfaces.
Is there another way to do it, like creating a file and running it?
Thanks.

Create files in /etc like hostname.bge0:1 and so on, with the appropriate information:
yourhostname netmask + broadcast + up
You also need to add pairs for those hostnames to /etc/hosts:
<your IP> yourhostname
Then plumb the interface
ifconfig bge0:1 plumb
and then bring it up
ifconfig bge0:1 `cat /etc/hostname.bge0:1`
Do not forget to add the appropriate netmask record to /etc/netmasks:
<your IP network> <your IP network netmask>
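Since the question asks for "creating a file and running it", the steps above can be sketched as one script for multiple sub-interfaces. This is a sketch only: the IP addresses, hostnames, and network below are hypothetical examples, and it must run as root on Solaris 10.

```shell
#!/bin/sh
# Sketch: persist several bge0 sub-interfaces at once.
# Addresses/hostnames are examples -- adjust to your network. Run as root.
i=1
for ip in 192.168.1.11 192.168.1.12 192.168.1.13; do
    name="myhost-sub$i"
    # persistent configuration, read at boot
    echo "$name netmask + broadcast + up" > /etc/hostname.bge0:$i
    echo "$ip $name" >> /etc/hosts
    # bring the sub-interface up now, without rebooting
    ifconfig bge0:$i plumb
    ifconfig bge0:$i `cat /etc/hostname.bge0:$i`
    i=`expr $i + 1`
done
# one netmask record per network (example network)
echo "192.168.1.0 255.255.255.0" >> /etc/netmasks
```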

PhpStorm "rsa key is corrupt or has the wrong version"

I'm using PhpStorm 2018.2 and attempting to connect to a remote host using an SSH key (I can connect via ssh in the terminal).
When I enter the (newly created) RSA key into the remote host settings, I get the error "'{path/to/key}_rsa' is corrupt or has unknown format" ... see image attached.
I have seen some posts about converting the key to an SSH2 key using this command
ssh-keygen -e -f ~/.ssh/key_rsa > ~/.ssh/key_rsa_ssh2
and using that in PhpStorm instead, but with no luck.
To expand on #eugenemorozov's answer, I had to do these two things:
add the private key(s) to ssh-agent using the ssh-add command; I did this by following this guide.
choose the OpenSSH config and authentication agent authentication type option when configuring the SFTP deployment connection options.
The SSH library we use doesn't support these keys.
We're currently looking for solutions. As a workaround, please use ssh-agent and choose this authentication type in the Deployment Configuration.
https://youtrack.jetbrains.com/issue/PY-24325
What worked for me was to convert the key in PuTTYgen, like this: https://youtrack.jetbrains.com/issue/IDEA-284623
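A common cause of this error (not confirmed in the thread, but consistent with the symptoms) is that newer OpenSSH versions write private keys in their own `OPENSSH PRIVATE KEY` format, which older bundled SSH libraries cannot parse. A sketch of converting such a key to the classic PEM format, assuming the key lives at `~/.ssh/key_rsa`:

```shell
# Keep a backup, then rewrite the key in the older PEM format in place
# (ssh-keygen -p prompts for the passphrase; -m PEM selects the output format)
cp ~/.ssh/key_rsa ~/.ssh/key_rsa.bak
ssh-keygen -p -f ~/.ssh/key_rsa -m PEM

# Alternatively, generate a fresh key directly in PEM format
ssh-keygen -t rsa -b 4096 -m PEM -f ~/.ssh/key_rsa_pem
```

After conversion the file should begin with `-----BEGIN RSA PRIVATE KEY-----` rather than `-----BEGIN OPENSSH PRIVATE KEY-----`.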

go-ethereum - geth - puppeth - ethstat remote server : docker: command not found

I'm trying to set up a private Ethereum test network using Puppeth (as Péter Szilágyi demoed at Ethereum Devcon3 in 2017). I'm running it on a MacBook Pro (macOS Sierra).
When I try to set up the ethstats network component I get a "docker configured incorrectly: bash: docker: command not found" error. I have Docker running and I can use it fine in the terminal, e.g. docker ps.
Here are the steps I took:
What would you like to do? (default = stats)
1. Show network stats
2. Manage existing genesis
3. Track new remote server
4. Deploy network components
> 4
What would you like to deploy? (recommended order)
1. Ethstats - Network monitoring tool
2. Bootnode - Entry point of the network
3. Sealer - Full node minting new blocks
4. Wallet - Browser wallet for quick sends (todo)
5. Faucet - Crypto faucet to give away funds
6. Dashboard - Website listing above web-services
> 1
Which server do you want to interact with?
1. Connect another server
> 1
Please enter remote server's address:
> localhost
DEBUG[11-15|22:46:49] Attempting to establish SSH connection server=localhost
WARN [11-15|22:46:49] Bad SSH key, falling back to passwords path=/Users/xxx/.ssh/id_rsa err="ssh: cannot decode encrypted private keys"
The authenticity of host 'localhost:22 ([::1]:22)' can't be established.
SSH key fingerprint is xxx [MD5]
Are you sure you want to continue connecting (yes/no)? yes
What's the login password for xxx at localhost:22? (won't be echoed)
>
DEBUG[11-15|22:47:11] Verifying if docker is available server=localhost
ERROR[11-15|22:47:11] Server not ready for puppeth err="docker configured incorrectly: bash: docker: command not found\n"
Here are my questions:
Is there any documentation or tutorial describing how to set up this remote server properly, or on Puppeth in general?
Can I not use localhost as the "remote server address"?
Any ideas on why the docker command is not found (it is installed and running, and I can use it fine in the terminal)?
Here is what I did.
For Docker, you have to use the docker-compose binary. You can find it here.
Furthermore, you have to make sure that an SSH server is running on your localhost and that keys have been generated.
I didn't find any documentation for Puppeth whatsoever.
I think I found the root cause of this problem. The SSH daemon is compiled with a default path. If you ssh to a machine with a specific command (rather than a shell), you get that default path. This does not include /usr/local/bin, for example, which is where docker lives in my case.
I found the solution here: https://serverfault.com/a/585075:
edit /etc/ssh/sshd_config and make sure it contains PermitUserEnvironment yes (you need sudo to edit this)
create a file ~/.ssh/environment with the path that you want, in my case:
PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
When you now run ssh localhost env you should see a PATH that matches whatever you put in ~/.ssh/environment.
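The steps above can be sketched as a short shell session. The PATH value is the one from the answer; how you restart sshd depends on the OS, so that step is only noted as a comment:

```shell
# Allow per-user environment files for SSH sessions (requires root)
sudo sh -c 'echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config'
# ...then restart sshd (on macOS, toggle Remote Login in the Sharing preferences)

# Give non-interactive SSH sessions a PATH that includes /usr/local/bin
echo 'PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin' > ~/.ssh/environment

# Verify: the PATH reported over SSH should now match, and docker should resolve
ssh localhost 'echo $PATH'
ssh localhost 'which docker'
```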

GCE + Load Balancer + Instance without public IP

I have an instance that, on purpose, does not have a public IP.
I have a GCE Network Load Balancer that uses the above instance as its target pool.
Everything works great.
Then I wanted my instance to communicate with the internet, so I followed this documentation: https://cloud.google.com/compute/docs/networking#natgateway (Configuring a NAT gateway).
The instance can communicate with the internet fine, but the load balancer can no longer reach my instance.
I think these steps create the issue with the load balancer:
$ gcloud compute routes create no-ip-internet-route --network gce-network \
--destination-range 0.0.0.0/0 \
--next-hop-instance nat-gateway \
--next-hop-instance-zone us-central1-a \
--tags no-ip --priority 800
user#nat-gateway:~$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
Do you know what can be done to make both things work together?
I have recreated the environment you've described and did not run into any issues.
$ gcloud compute routes create no-ip-internet-route --network gce-network \
--destination-range 0.0.0.0/0 \
--next-hop-instance nat-gateway \
--next-hop-instance-zone us-central1-a \
--tags no-ip --priority 800
The only thing the above command does is create a routing rule so that instances with no external IP address send any outbound traffic through the NAT gateway. This will not affect the LB's ability to reach your instance.
In my test, I followed the exact guide you referenced, which you can find here, and that results in:
1 new network
1 firewall rule to allow SSH on port 22
1 firewall rule to allow all internal traffic
1 new instance to act as a NAT Gateway
1 new internal instance with no external IP address
I also added the internal instance to a 'TargetPool' and created an LB for the purpose of the test.
My internal instance was accessible both via the LB's external address and internally via the NAT Gateway. The internal instance was also able to communicate with the Internet due to the NAT Gateway's configuration. Everything works as expected.
My recommendation for you and other readers (as this post is now rather old) is to try again. Make sure that you do not have any other conflicting rules, routes or software.
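When re-checking for conflicting rules and routes, a few gcloud commands can help. This is only a sketch; `my-target-pool` and the region are hypothetical names to substitute with your own, while `no-ip-internet-route` and `gce-network` come from the question:

```shell
# Inspect the NAT route created earlier (priority, tags, next hop)
gcloud compute routes describe no-ip-internet-route

# List all routes on the network to spot conflicting or higher-priority ones
gcloud compute routes list --filter="network=gce-network"

# Confirm the internal instance is still in the LB's target pool
gcloud compute target-pools describe my-target-pool --region us-central1
```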

Do Google Compute instances have a stable public DNS name?

This is a question in two parts:
Do GCE instances have a stable public DNS name? The default DNS name for an instance with public IP a.b.c.d seems to be d.c.b.a.bc.googleusercontent.com
If yes, what's the best way to obtain this information? Here's the hack I've been using thus far:
EXTERNAL_IP=$(curl -s http://bot.whatismyipaddress.com/)
EXTERNAL_DNS=$(dig +short -x ${EXTERNAL_IP})
A reverse lookup is okay to do; for the IP address you would probably prefer using gcutil
https://developers.google.com/compute/docs/gcutil/tips
EXTERNAL_IP=$(gcutil getinstance --format=csv --zone=[your_zone] [your_instance] | grep external-ip | cut -d "," -f 2)
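From inside the instance itself, an alternative that avoids both the external whatismyipaddress service and gcutil is to ask the GCE metadata server, which only answers from within the instance:

```shell
# Query the GCE metadata server for this instance's external IP
# (works only from inside the instance; the Metadata-Flavor header is required)
EXTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip")

# Then resolve the googleusercontent.com name as before
EXTERNAL_DNS=$(dig +short -x "${EXTERNAL_IP}")
```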
GCE instances don't currently have a public DNS name for their external IP address. But there is now a gcloud compute config-ssh (docs) command that's a pretty good substitute.
This will insert Host blocks into your ~/.ssh/config file that contain the IP address and configuration for the host key.
Although this only helps with SSH (and SSH-based applications like Mosh and git+ssh), it does have a few advantages over DNS:
There is no caching/propagation delay as you might have with DNS
It pre-populates the right host key, and the host key is checked the right way even if the ephemeral IP address changes.
Example:
$ gcloud compute config-ssh
...
$ ssh myhost.us-west1-b.surly-koala-232

UnknownHostException while formatting HDFS

I have installed CDH4 on CentOS 6.3 64-bit in pseudo-distributed mode using the following instructions. Everything is set to localhost in the Hadoop configuration files. But still, when I format the NameNode, the exception below appears. When I add a 192.168.1.101 CentOSHost entry to the /etc/hosts file, the exception goes away and I am able to format/start HDFS and run MR jobs.
I want to run MR jobs even when I am not connected to the network, without adding an entry to the /etc/hosts file. How can I get this done?
12/08/27 22:17:15 WARN net.DNS: Unable to determine address of the host-falling back to "localhost" address
java.net.UnknownHostException: CentOSHost: CentOSHost
at java.net.InetAddress.getLocalHost(InetAddress.java:1360)
at org.apache.hadoop.net.DNS.resolveLocalHostIPAddress(DNS.java:283)
at org.apache.hadoop.net.DNS.(DNS.java:59)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newBlockPoolID(NNStorage.java:1017)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.newNamespaceInfo(NNStorage.java:565)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:145)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:724)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1095)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)
It looks like somewhere the configuration is returning/using the hostname CentOSHost.
What does hostname --fqdn return for you?
For Hadoop, it is important that name lookup and reverse lookup work successfully. You should be able to resolve the IP address from the hostname and the hostname from the IP address (reverse resolution). This can be tested using the above command.
The entry in /etc/hosts is required for reverse resolution to work, unless the entry and the configuration point to localhost. Even in that case, hostname --fqdn should return localhost.
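The forward and reverse lookups described above can be checked with a few commands. This is a sketch: the 192.168.1.101 address comes from the question, and `dig` requires the bind-utils package on CentOS:

```shell
# What hostname does the system report? Hadoop will try to resolve this.
hostname --fqdn

# Forward lookup: hostname -> IP (consults /etc/hosts, then DNS, per nsswitch.conf)
getent hosts "$(hostname --fqdn)"

# Reverse lookup: IP -> hostname (address from the question; needs bind-utils)
getent hosts 192.168.1.101
dig +short -x 192.168.1.101
```

If the forward lookup prints nothing, Hadoop will fail with the same UnknownHostException shown in the trace.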