Ethereum private blockchain: peers cannot see each other over the internet

This may be a common question with many answers on the internet, but I cannot make it work, so please give me some hints.
I set up an Ethereum private blockchain following the steps at https://github.com/ethereum/go-ethereum/wiki/Connecting-to-the-network
The first node was started with basic parameters like:
geth --datadir "firstNode" --identity "firstNode" --networkid 65535000 --rpc --nat "any" --rpccorsdomain "*" console
The second PC on the local network was started with --bootnodes set to the first node's enode address.
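For reference, a minimal sketch of what that second-node invocation typically looks like (the enode URL and LAN IP are placeholders; the real URL comes from admin.nodeInfo.enode on the first node, and both nodes need to be initialised from the same genesis file):
geth --datadir "secondNode" --identity "secondNode" --networkid 65535000 --bootnodes "enode://<first-node-pubkey>@192.168.1.10:30303" console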
Even on the same local network, these PCs do not see each other in admin.peers; instead, I see enodes from the public internet.
I also tried to connect a third PC (my laptop) over the internet to my private blockchain with the same network ID (say 65535000).
I assumed that I just needed one 24/7 running node as a bootnode, so that the other PCs could start with that bootnode and the peers would find each other automatically, as many documents say. But in reality I cannot form a private chain for testing. I have tried many solutions, but the issue is still there: what I'm mining now is other people's blocks from the internet, not my own blocks.
Is there an option I'm missing?
Checked:
- my internet router has UPnP enabled by default
- the two PCs need a manual addPeer(enode address) to see each other; without manual peering they cannot find each other, even when started with the bootnode on the geth command line (see the console sketch after this list)
- from outside, I tried to peer with my local private chain by connecting to the first node's enode address using its public IP (checked on the router), but that doesn't help
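For context, a minimal sketch of what that manual peering looks like in the geth console (the enode URL is a placeholder, taken from admin.nodeInfo.enode on the other node):
admin.addPeer("enode://<other-node-pubkey>@192.168.1.11:30303")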
I'm confused now and don't know what exactly to look for.
Thanks for your help

Which tool are you using to build this application?
I personally faced this issue on IBM Bluemix; I was not able to discover peers over the internet, but in Bluemix I found the following solution.
On the Blockchain dashboard, under the Network tab, there are action buttons on the right side of the panel for each validating peer. If a peer is stopped, one of the actions will be Start; select that button. If you are using Hyperledger Fabric, you should look for similar settings.

"An internal error occurred" / error code 0x4 with Remote Desktop connecting to a Google Cloud Compute Engine VM

When trying to log in via RDP, the "old" Remote Desktop Connection client gave "an internal error occurred",
and the new modern-UI Remote Desktop app from the Windows Store (version 10.2.1810.0) gave: error code 0x4.
It seems a colleague had been logged in with a weird screen size. How can I resolve this without rebooting the machine?
I found a solution.
In the new GUI, untick "Uppdatera fjärrsessionens upplösning vid storleksändring" (the client was in Swedish),
which translates to: Update the resolution of the remote session when resizing.
Actually, even moving the port off 3389 doesn't help (for long):
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber
If there is an open port allowing RDP, they will find it eventually, and you will need to implement one of the programs noted in Daniel's answer. That was exactly my issue as well. You can usually tell if the problem is intermittent, since it's just luck to get past the constant pounding on the open port...
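For reference, changing that value is typically done with something like the following (3390 is just an example port; the Remote Desktop Services service has to be restarted and the new port allowed through the firewall for it to take effect):
reg add "HKLM\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v PortNumber /t REG_DWORD /d 3390 /f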
For me, it was the graphics choice. No idea why, but as I swapped between the four options, only one worked.
I needed to set it to Highest Quality (32 bit).
However, I then restarted the server and it no longer worked, but True Colour (24 bit) did work! So hopefully toggling through each option will get you through.
In my case the cause was AVG Firewall blocking some RDP connections. I had to configure AVG Remote Access Module to allow RDP connections from some known IP addresses.
Hope this helps someone.
It looks like this error code pops up for many different reasons, from screen-resolution resizing to colour depth to firewalls and more, which is quite odd. You have to check what your specific case is.
In my case, error code 0x4 was caused by an open, unprotected RDP port (3389). As many would guess, this is a port heavily targeted by bots; if it is open to anyone on the internet, it's just a matter of time before your server or computer is targeted.
The best solution is to only allow connections from trusted IP addresses, the ones you use to connect to your server.
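A minimal sketch of that approach with the built-in Windows Firewall (the IP address is a placeholder for your own trusted address, and the default allow-any RDP rules would still need to be disabled):
netsh advfirewall firewall add rule name="RDP from trusted IP only" dir=in action=allow protocol=TCP localport=3389 remoteip=203.0.113.25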
Of course, that isn't always possible, so another solution is something like the fail2ban utility used on many Linux servers.
The two solutions I've found are EvlWatcher, which is free and open source, and IPBan, which has a free and open-source version as well as a paid one.
You only need one of them, as they do the same thing; do not install both. They scan your logs and temporarily or permanently block any IP address with repeated failed connections. I suggest you always whitelist your main IP address so you don't lock yourself out.
Best regards to you all.

Problem encountered while integrating DevStack with OSM (Open Source MANO)

I'm currently trying to build a cloud on my PC using VirtualBox. The idea is that I have two virtual machines, one with DevStack installed (all-in-one) and the other with OSM MANO. Right now both have everything installed, so I can log in to MANO with the user and password 'admin', as well as to DevStack.
Current properties:
VM1 (devstack): IP (enp0s8) -> 192.168.56.101
Login to 192.168.56.101 -> correct
VM2 (mano): IP (enp0s8) -> 192.168.56.105
Login to 192.168.56.105 -> correct
As some of you may guess, I have two network interfaces in every VM, the first being NAT (enp0s3 with the 10.0.2.15 IP) and the second being host-only (192.168.56.x according to VirtualBox).
Needless to say, I can ping from one virtual machine to the other without any problem.
Now, in the past I've been using DevStack (on Ubuntu 18.04) to play with it a little and learn how to deploy instances, create groups and so on. Indeed, I built a topology with an instance acting as a router and Nagios as the monitoring system. It worked and I learnt a lot!
Anyway, what I want in this case is to start from scratch (scratch meaning having downloaded MANO and DevStack but without going any further). So here I am, trying to integrate OSM with DevStack, using the osm vim-create command as is:
osm vim-create --name openstack-site --user admin --password my_openstack_password --auth_url http://192.168.56.101:5000/v3 --tenant admin --account_type openstack
In this case, my openrc file (downloaded from Horizon) shows my auth_url as:
export OS_AUTH_URL=http://192.168.56.101:5000/v3
What I'm trying to get my head around is how it's possible that this doesn't work: whenever I log in to the MANO web interface (after the osm vim-create command) and go to VIM Accounts, the operational state equals "error".
Any kind of help would be much appreciated, as I've been struggling with this for a week now.
Thanks in advance!
I had the same problem. At the beginning I thought it was a network problem, but I finally found out it was due to an SSL problem. The easiest solution is to pass a specific flag to skip the SSL verification until the developers fix it: --config '{insecure: True}'
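Applied to the command from the question, that would look something like this (a sketch, not verified here):
osm vim-create --name openstack-site --user admin --password my_openstack_password --auth_url http://192.168.56.101:5000/v3 --tenant admin --account_type openstack --config '{insecure: True}'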
I also encountered this problem after installing OSM 10 and OpenStack Ussuri on Ubuntu 18.04 a few days ago. I solved it by changing the URL --auth_url http://192.168.23.18:5000/v3 to --auth_url http://controller:5000/v3 and putting "192.168.23.18 controller" in /etc/hosts inside the RO container. "controller" here is the hostname of the machine where you installed OpenStack, the one used in your Keystone authentication URLs. Maybe you have already solved this, but this problem is so troublesome that I hope it stops annoying other people.
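In other words, a sketch of the two steps described above (adapt the IP address and hostname to your own setup):
echo "192.168.23.18 controller" >> /etc/hosts    # run inside the RO container
osm vim-create --name openstack-site --user admin --password my_openstack_password --auth_url http://controller:5000/v3 --tenant admin --account_type openstack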

"Route not admitted by a router" on Openshift Online v3

I have an application deployed on the OpenShift Online v3 Starter plan which ran well until yesterday. Yesterday I had to publish a new version of my application. Apparently the platform encountered some problems redeploying it, and I had to cancel some processes which seemed locked or kept restarting.
Finally I managed to get my pod running with the new version, and the logs look fine.
The issue now is that my app is no longer exposed. When hitting the URL that was assigned to me, I get the infamous "not available" OpenShift Online page:
Application is not available
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
I checked these three suggestions and made sure that my host existed, that the path was correct, and that my pods were up. So, not understanding what the real issue was, I dropped the existing route and created a new one.
It's been two hours now, and the route UI keeps displaying this message:
The route is not accepting traffic yet because it has not been admitted by a router.
My understanding is that the router which should admit my route is not part of my project; it is managed by OpenShift Online. Am I right?
So what can I do now to unblock my new route?
Thanks for your suggestions.
There it is! My application is reachable again at last: it took two days for the router to set up my route. No action was required on my part.
But the Starter platform is still experiencing difficulties, so I will avoid any redeployment until the status turns green again.
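For anyone hitting the same message, a route's admission status can also be checked from the CLI; a minimal sketch (the route name is a placeholder):
oc get route my-route -o yaml    # look for a condition of type Admitted under status.ingress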

OpenShift 3 Online Starter & Routing

I have a Starter (free) tier account with OpenShift Online. I have an application consisting of two pods, a Node one and a Mongo one. The pods build and deploy; from the terminal that runs in the web console on the running Node pod, I can run curl localhost:8080 and the Node process obligingly spits back my base page.
I have a route that was autogenerated; the web console gives me a link to <myappname>.stuff.starter-east-1.openshiftapps.com and appears to correctly reference the Node service that sits on top of the running Node pod.
However, when I point my browser at that hostname, I get the OpenShift error page telling me that either the route or path was not typed correctly, or the pod isn't running.
I have tried this with my own code and with the example Node packages, and I see the same thing.
When I use the oc tool to query things about my application, I see that I don't have a router resource, but the route claims to have been exposed on a router. So I think I'm using some kind of default router in the node, and I don't have to launch one in my project, but I'm not sure. Most of the other questions around this topic are from people using the Enterprise product on their own hardware, where they have more control at the admin layer over the router package; all the suggestions seem to imply that for the Online product this 'just works'. Any idea what I am missing?
Update: after some period of time, the example project did work and a browser request was served with the basic example page. Looking at the two setups, I cannot see any differences, or why my route (in my custom app) never gets activated while the sample project route does.
It turns out the issue was that my application (Node) was listening on localhost:8080 when it needed to be listening on 0.0.0.0:8080. I'm not enough of a networking guy to explain why that matters to the router, but it does.
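A quick way to see the difference from the pod's terminal, if the ss tool is available in the image (a sketch; netstat shows the same thing):
ss -ltn    # a socket bound to 127.0.0.1:8080 only accepts loopback traffic; 0.0.0.0:8080 (or *:8080) also accepts the router's connections
In Node this usually comes down to the host argument passed to listen(): use '0.0.0.0' (or omit the host) instead of 'localhost'.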

Trouble with hg serve on the LAN

Our team started using Mercurial about a month ago. It was a rough start, but it's working out well now. At the end of last week, though, we suddenly had issues pulling from each other's repositories.
Normally I would pull from, for example, prog12:8000, and it would work great. Now I get the message:
URLError: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
The hg server is running, and it's not a firewall issue. This issue only occurs when trying to access my repo and two other people's; accessing everyone else's, and the one on our webdev server, is fine. We are all on the same LAN (though two of us connect via VPN). We all have the same issue: from my own computer I can type in my computer name:8000 and it works, but no one else can see it.
I appreciate any suggestions!
Is it possible your IT department deployed something that's acting as a firewall on each machine? Being able to connect to your own port 8000, but not others', just screams firewall.
That said, most people don't actually run hg serve on developer boxes. Instead, you let each developer freely create repos on the "central" "webdev" box. So I might create 'work-in-progress-ry4an' and push/pull from there, and others can pull from it.
The hg serve functionality is a great way to pass someone some quick changesets, but it isn't built to be used as an always-on server.
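For the quick hand-off case, that usually looks something like this (a sketch; run each command inside the relevant repository):
hg serve -p 8000                # on the machine that has the changesets
hg pull http://prog12:8000/     # on the machine that wants them
hg serve listens on all interfaces by default, so if this works on localhost but not across the LAN, a per-machine firewall is the usual suspect.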