failed to negotiate security protocol: privnet: could not read full nonce - ipfs

First post on Stack Overflow, so sorry if this question is lacking any necessary information.
I am trying to set up a private IPFS swarm hosted on GCP. I have a bootstrap node that creates the swarm.key, which is then distributed to the other nodes, and the addresses are announced. The bootstrap address is configured on the nodes, and when I run ipfs swarm connect /dns4/<service address>/tcp/15055/p2p/12D3KooWP*7qv I receive the following error: [/ip4/<ip address>/tcp/15055] failed to negotiate security protocol: privnet: could not read full nonce.
Anyone know why this could be?
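For reference, this is roughly the sequence I run on each non-bootstrap node (the swarm.key location and the bootstrap multiaddr are placeholders for my actual values):
# copy the swarm.key generated on the bootstrap node into the local IPFS repo
cp swarm.key ~/.ipfs/swarm.key
# make the daemon refuse to start without the private network key
export LIBP2P_FORCE_PNET=1
# replace the default public bootstrap list with our own bootstrap node
ipfs bootstrap rm --all
ipfs bootstrap add /dns4/<service address>/tcp/15055/p2p/12D3KooWP*7qv
ipfs daemon &
# then try to connect to the bootstrap node
ipfs swarm connect /dns4/<service address>/tcp/15055/p2p/12D3KooWP*7qv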

Related

Exposing a Postgres / Patroni db on OpenShift to the outside world

I am planning to run an SSIS ETL job which has a SQL Server as the SOURCE db, running on a physical on-premise machine, while the DESTINATION db (Postgres/Patroni) runs on the OpenShift platform as pods/containers. The issue I am facing is that the DB hosted on OpenShift cannot be exposed via a TCP port. According to a few articles online, OpenShift only allows HTTP traffic via "routes". Is this assumption right? If yes, how in the real world do people run ETL, bulk data transfer, or migration to a db on OpenShift from outside? I am worried about using HTTP since I feel it's not efficient for ETL. A few folks mentioned using OC PORT FORWARDING, but for a production app, how would OpenShift port forwarding be stable? Please share your comments.
In a production environment it is a little questionable whether you want to expose your database to the public internet. Normally you would rather go with a site-to-site VPN.
That aside, it is correct that OCP uses routes for most use cases, which then expose an HTTP(S) endpoint. If you need plain TCP, however, you can create a service of type LoadBalancer.
The regular setup with a route is stacked like
route --> service --> pods, where the service is commonly of type ClusterIP.
With a service of type LoadBalancer, you eliminate the route and directly expose a TCP service.
If you run on a public cloud, OCP takes care of the leftover requirements for you, namely creating a load balancer with your cloud provider. In the case of AWS, for example, OCP would create an ELB (Elastic Load Balancer) for you.
You can find more information in the documentation.
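A rough sketch of such a service (the name, selector label, and port are placeholders and would need to match your Patroni deployment):
# minimal sketch of a LoadBalancer service exposing Postgres over plain TCP;
# adjust metadata.name, the selector, and the ports to your deployment
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: patroni-external
spec:
  type: LoadBalancer
  selector:
    app: patroni
  ports:
    - protocol: TCP
      port: 5432
      targetPort: 5432
EOF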

Running geth with "--allow-insecure-unlock"

I am trying to send transactions via the web3.py interface that is connected to a local geth node. Having read some comments on why using RPC is bad, I am still wondering if using the --rpc option is unsafe when port 8545 is closed. According to this article (https://www.zdnet.com/article/hackers-ramp-up-attacks-on-mining-rigs-before-ethereum-price-crashes-into-the-gutter/) the vulnerability only concerns exposed ports, but since I am basically communicating with a node on the local network, this shouldn't be a problem, right?
The article covers an attack vector where attackers look for machines with port 8545 open and try to run JSON-RPC commands on those machines that would benefit them. This attack only works if:
The machine has port 8545 open to the public
The port is used by an Ethereum node (and not some arbitrary app)
The node has JSON-RPC enabled
The node doesn't require user/password credentials for JSON-RPC
So as long as your node is only accessible on a local network, you are pretty much safe from this attack vector (assuming there's no port forwarding etc. that would actually allow accessing your node from a public network, and that there's no attacker on your local network).
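As a minimal sketch (assuming a reasonably recent geth, where the --http.* flags replaced the older --rpc* ones), keeping the JSON-RPC interface bound to localhost looks roughly like this:
# expose JSON-RPC only on the loopback interface, so nothing outside the machine can reach port 8545
# (--allow-insecure-unlock is only needed if you unlock accounts over HTTP)
geth --http --http.addr 127.0.0.1 --http.port 8545 --http.api eth,net,web3 --allow-insecure-unlock
web3.py can then connect with an HTTPProvider pointed at http://127.0.0.1:8545.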

Load Balancer not able to connect with backend

I have deployed a Spring Boot app on an OCI compute instance and it comes up nicely. The compute instance was created with a public IP and has its security list updated to allow connections from the internet. But I wasn't able to hit the endpoint from the internet, so I thought of configuring a load balancer.
I created the load balancer in a separate subnet (10.0.1.0/24), with its own routing table and security list. I configured the LB's security list to send all protocol packets to the compute's CIDR (10.0.0.0/24) and configured the compute's security list to accept packets from the LB. I was expecting the LB to make a connection with the backend, but it's not.
I am able to hit the LB from the internet.
The LB's routing table routes all IPs through the internet gateway. There is no route defined for the compute's CIDR, as it's within the VCN.
The LB has its own security list, which allows outgoing packets to the compute instance and incoming packets from the internet, as below:
The compute's security list accepts packets from the LB:
Let me know if I am missing something here.
My internet gateway:
My backend set connection configuration from the LB:
The LB fails to make a connection with the backend, and there seems to be no logging info available:
The app is working fine if I access it from the compute node:
The LB has a health check that tests the connection to your service. If it fails, the LB will keep your backend out of rotation and report the Critical health status you're seeing.
You can get to it by looking at the backend set and clicking the Update Health Check button.
Edit:
Ultimately I figured it out; you should run the following commands on your backend:
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
Use the port that you configured your app to listen on.
I used httpd instead of Spring, but I also did the following:
sudo semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
sudo restorecon -F -R -v /var/www/html
I'm not really too familiar with SELinux, but you may need to do something similar for your application.
Additionally, setting up a second host in the same subnet to log in to and test connecting to the other host will help with troubleshooting, since it will verify whether your app is accessible at all outside the host it's on. Once it is, the LB should come up fine.
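For example (just a sketch; the private IP and port are placeholders for your backend's values), from a second instance in the same subnet:
# check that the app answers on its private address before involving the LB
curl -v http://10.0.0.12:8080/
# or simply verify that the port is reachable
nc -zv 10.0.0.12 8080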
TL;DR In my case it helped to switch the security list rules from stateful to stateless on the two relevant subnets (the one hosting the load balancer and the one where the backends were located).
In our deployment I had a load balancer with a public IP on one subnet, while its backends were on another subnet. Both subnets had one ingress and one egress rule allowing everything (i.e. 0.0.0.0/0 and all ports). The backends were still not reachable from the load balancer and the health checks were failing.
Even though, per the documentation, switching between stateful and stateless should not have made a difference in my case, it solved my issue.

IPFS nodes in Docker and Web UI

This question refers to the following project, which is about integrating Fabric blockchain and IPFS.
The architecture basically comprises a swarm of Docker containers that should communicate with each other (three containers: two peer nodes and one server node). Every container is an IPFS node and has a separate configuration.
I am trying to run a dockerized environment of an IPFS cluster of nodes and view the Web UI that comes with it. I set up the app by running all the steps described, and then supposedly I would be able to see the Web UI at this address:
http://127.0.0.1:5001
Everything seems to be set up and configured as it should be (I checked every docker logs <container>). Nevertheless, all I get is an empty page.
When i try to view my local IPFS repository via
https://webui.ipfs.io/#/welcome
I get a message that this is probably caused by a CORS error (which makes sense), and it is suggested to change the IPFS configuration in order to bypass the CORS error.
I try to implement the solution by changing the headers in the configuration, but it doesn't seem to have any effect (roughly what I run is sketched at the end of this question).
The confusion stems from the fact that after setting up the containers we have 3 different containers with 3 configurations, and in addition the IPFS daemon is running in each one of them. Outside the containers the IPFS daemon is not running.
I don't know if the IPFS daemon outside the containers should be running.
I'm not sure which configuration (if not all of them) I should modify.
Should I use a reverse proxy to solve this?
Useful Info
The development is done in a Linux-Ubuntu VM that meets all the necessary requirements.
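Roughly what I run against each container (the container name ipfs-peer0 is a placeholder; the origins follow the suggestion for the public Web UI):
# allow the Web UI origins on the container's API
docker exec ipfs-peer0 ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["http://127.0.0.1:5001", "https://webui.ipfs.io"]'
docker exec ipfs-peer0 ipfs config --json API.HTTPHeaders.Access-Control-Allow-Methods '["PUT", "POST", "GET"]'
# the API also needs to listen on an address reachable from outside the container
docker exec ipfs-peer0 ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
# restart the container so the daemon picks up the new config
docker restart ipfs-peer0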

Access external client IP from behind Google Compute Engine network load balancer

I am running a Ruby on Rails app (using Passenger in Nginx mode) on Google Container Engine. These pods are sitting behind a GCE network load balancer. My question is how to access the external client IP from inside the Rails app.
The GitHub issue here seems to present a solution, but I ran the suggested:
for node in $(kubectl get nodes -o name | cut -f2 -d/); do
kubectl annotate node $node \
net.beta.kubernetes.io/proxy-mode=iptables;
gcloud compute ssh --zone=us-central1-b $node \
--command="sudo /etc/init.d/kube-proxy restart";
done
but I am still getting a REMOTE_ADDR header of 10.140.0.1.
Any ideas on how I could get access to the real client IP (for geolocation purposes)?
Edit: To be clearer, I am aware of the ways of accessing the client IP from inside Rails; however, all of these solutions are getting me the internal Kubernetes IP. I believe the GCE network load balancer is not configured (or perhaps unable) to send the real client IP.
A Googler's answer to another version of my question verifies that what I am trying to do is not currently possible with the Google Container Engine network load balancer.
EDIT (May 31, 2017): as of Kubernetes v1.5 and up, this is possible on GKE with the beta annotation service.beta.kubernetes.io/external-traffic. This was answered on SO here. Note that when I added the annotation, the health checks were not created on the existing nodes; recreating the LB and restarting the nodes solved the issue.
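For reference, a minimal sketch of applying that annotation (the service name my-rails-service is a placeholder; as far as I recall, OnlyLocal was the accepted value for the beta annotation at the time):
# route external traffic only to nodes running a local pod, which preserves the client source IP
kubectl annotate service my-rails-service service.beta.kubernetes.io/external-traffic=OnlyLocal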
It seems as though this is not a Rails problem at all, but one of GCE. You can try the first part of
request.env["HTTP_X_FORWARDED_FOR"]
Explanation
Getting Origin IP From Load Balancer advises that https://cloud.google.com/compute/docs/load-balancing/http/ has the text:
The proxies set HTTP request/response headers as follows:
Via: 1.1 google (requests and responses)
X-Forwarded-Proto: [http | https] (requests only)
X-Forwarded-For: <client IP(s)>, <global forwarding rule external IP> (requests only)
Can be a comma-separated list of IP addresses depending on the X-Forwarded-For entries appended by the intermediaries the client is traveling through. The first element in the section shows the origin address.
X-Cloud-Trace-Context: <trace-id>/<span-id>;<trace-options> (requests only)
Parameters for Stackdriver Trace.
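So, as a quick sketch (the header value below is made up), the client IP is just the first comma-separated entry of that header:
# hypothetical X-Forwarded-For value: client IP first, then the forwarding rule's external IP
XFF="203.0.113.7, 35.201.10.10"
CLIENT_IP=$(echo "$XFF" | cut -d',' -f1 | tr -d ' ')
echo "$CLIENT_IP"   # prints 203.0.113.7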