How to run several IPFS nodes on a single machine?

For testing, I want to be able to run several IPFS nodes on a single machine.
This is the scenario:
I am building small services on top of the IPFS core library, following the Making your own IPFS service guide. When I run the client and the server on the same machine (note that each of them creates its own IPFS node), I get the following:
panic: cannot acquire lock: Lock FcntlFlock of /Users/long/.ipfs/repo.lock failed: resource temporarily unavailable

Usually, when you start with IPFS, you run ipfs init, which creates a new node. The data and config for that node are stored at ~/.ipfs by default. Here is how you can create a second node and configure it so it can run alongside your default node.
1. Create a new node
For a new node you have to use ipfs init again, but with a different repo path. Use, for instance, the following:
IPFS_PATH=~/.ipfs2 ipfs init
This will create a new node at ~/.ipfs2 (not using the default path).
2. Change Address Configs
Since both of your nodes would now bind to the same ports, you need to change the port configuration so that they can run side by side. For this, open ~/.ipfs2/config and find Addresses:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
}
and change it to, for example, the following:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/8081",
"Swarm": [
"/ip4/0.0.0.0/tcp/4002",
"/ip6/::/tcp/4002"
]
}
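If you prefer not to edit the file by hand, the same changes can be made with the ipfs config command (a sketch using the ports chosen above):
IPFS_PATH=~/.ipfs2 ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
IPFS_PATH=~/.ipfs2 ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
IPFS_PATH=~/.ipfs2 ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002", "/ip6/::/tcp/4002"]'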
With this, you should be able to run both nodes, .ipfs and .ipfs2, on a single machine.
Notes:
Whenever you use .ipfs2, you need to set the environment variable IPFS_PATH=~/.ipfs2.
In your example, you need to change either your client or your server node from ~/.ipfs to ~/.ipfs2.
You can also start the daemon on the second node using IPFS_PATH=~/.ipfs2 ipfs daemon & (see the sketch below).
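As a concrete sketch, here is how the two daemons can be run side by side and addressed individually (the --api flag tells the CLI which daemon to talk to):
ipfs daemon &
IPFS_PATH=~/.ipfs2 ipfs daemon &
# the CLI defaults to ~/.ipfs; target the second node via its repo path or its API address
IPFS_PATH=~/.ipfs2 ipfs id
ipfs --api /ip4/127.0.0.1/tcp/5002 id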

Hello, I used the .ipfs2 approach. After running two daemons at the same time, I can indeed open localhost:5001/webui, but opening the second node's localhost:5002/webui gives an error, as shown in the attachment.

Here are some ways I've used to create multiple nodes/peer ids.
I use windows 10.
1st node go-ipfs (latest version)
2nd node Siderus Orion ipfs (connects to an Orion node, not a local one) -- https://orion.siderus.io/
Use VirtualBox to run a minimal Ubuntu installation. (You can set up as many as you want.)
Repeat the process and you have 4 nodes, or as many as you want.
https://discuss.ipfs.io/t/ipfs-manager-download-install-manage-debug-your-ipfs-node/3534 is another GUI that installs IPFS and lets you manage all IPFS commands without the command line. It was released a few days ago and looks well worth a review.
Disclaimer: I am not a coder or computer professional, just a huge fan of IPFS! I hope we can raise awareness and change the world.

Related

Hyperledger Composer CLI Ping to a Business Network returns AccessException

I'm trying to learn Hyperledger Composer, but it seems to be a relatively new technology; there are few tutorials and few solutions to a lot of questions. The tutorial does not mention possible error cases when following the commands, which means there is also no solution for those errors.
I have joined the composer channel in their community chat (it looks like it runs on Discord or something) and asked the same question without a response; I have had a better experience here on SO.
This is the problem: I have deployed my business network, installed it, started it, created my network admin card, and imported it. Then, to test that everything is OK, I have to run composer network ping --card NAME-OF-MY-ADMIN-CARD
And this error comes:
juan@JuanDeDios:~/proyectos/inovacion/a3-poliza-microservice$ composer network ping --card admin@a3-policy-microservice
Error: transaction returned with failure: AccessException: Participant 'org.hyperledger.composer.system.NetworkAdmin#admin' does not have 'READ' access to resource 'org.hyperledger.composer.system.Network#a3-policy-microservice@0.0.1'
Command failed
I think it has something to do with the permissions.acl file, so I gave everyone permission to everything so there would be no restrictions on anyone, and tried again, but it failed.
So I thought I had to uninstall my business network and create it again. I also deleted my .bna and my network card files so everything would be created again, but the same error resulted.
My other attempt was to update the business network, but that didn't work; the same error happened, and I'm sure I didn't miss any step from the tutorial. I also followed the playground tutorial. What I have not done is create another app with Yeoman, but I will if I don't find a solution to this problem that doesn't require creating another app.
These were my steps:
1-. Created my app with Yeoman
yo hyperledger-composer:businessnetwork
2-. Selected Apache-2.0 for my license
3-. Created a3-policy-microservice as the name of the business network
4-. Created org.microservice.policy (yeah, I switched names, but I'm totally aware)
5-. Generated my app with a template selecting the NO option
6-. Created my assets, participants and transactions
7-. Changed my permission rules to mine
8-. I generated the .bna file
composer archive create -t dir -n .
9-. Then installed my bna file
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-microservice@0.0.1.bna
10-. Then started my network and created my networkadmin card
composer network start --networkName a3-policy-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file networkadmin.card
11-. Imported my card
composer card import --file networkadmin.card
12-. Tried to ping my network
composer network ping --card admin@a3-poliza-microservice
And the error happens
Later I tried to create everything again, shutting down my Fabric and starting it up again, creating the network from the first step.
My other attempt was to change the permissions and upgrade my .bna network, but it failed too. I'm running out of options.
I hope this description is not too long to be ignored. Thanks in advance!
Thanks for the question!
The first possibility is that your network name is a3-policy-network but you're pinging a network called a3-poliza-microservice - check that once you get the correct ACLs in place (currently, the ACL error is the one you're trying to resolve).
The procedure for an upgrade would normally be as follows.
After your step 12 (where you can't ping the business network due to restrictive ACL conditions, assuming you are using the right network name) you would:
Change your permissions.acl to include the system ACLs this time, e.g.:
/**
 * Sample access control list.
 */
rule SystemACL {
  description: "System ACL to permit all access"
  participant: "org.hyperledger.composer.system.Participant"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
rule NetworkAdminUser {
  description: "Grant business network administrators full access to user resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "**"
  action: ALLOW
}
rule NetworkAdminSystem {
  description: "Grant business network administrators full access to system resources"
  participant: "org.hyperledger.composer.system.NetworkAdmin"
  operation: ALL
  resource: "org.hyperledger.composer.system.**"
  action: ALLOW
}
Update the "version" field in your existing package.json in your Business Network project directory (i.e. increment it - e.g. update the version property from 0.0.1 to 0.0.2).
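For reference, the relevant fragment of package.json would then look something like this (a sketch; the name field is assumed to match your business network name, other fields omitted):
{
  "name": "a3-policy-network",
  "version": "0.0.2"
}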
From the same directory, run the following command:
composer archive create --sourceType dir --sourceName . -a a3-policy-network@0.0.2.bna
Now install the new business network code firstly:
composer network install --card PeerAdmin@hlfv1 --archiveFile a3-policy-network@0.0.2.bna
Then perform the requisite upgrade step (single '-' for short form of the parameter):
composer network upgrade -c PeerAdmin@hlfv1 -n a3-policy-network -V 0.0.2
After a few seconds, ping the network again to see that the ACL changes are now in effect:
composer network ping -c admin@a3-policy-network

Elasticsearch 5.0.0 cluster node not joining

OK, this shouldn't be this hard. I'm trying to run 2 nodes in an Elasticsearch cluster and getting an exception when trying to start node-1 (node-2, which is master, is already started). Using Elasticsearch v5.0.0 for both instances.
Exception: failed to send join request to master, reason: RemoteTransportException: can't add node, found existing node with the same id but is a different node instance
Node-1 config:
node.name: SANNNNN-1
network.host: 10.3.185.250
discovery.zen.ping.unicast.hosts: ["10.3.185.251:9300"]
Node-2 config:
node.name: SAN-2
network.host: 10.3.185.251
discovery.zen.ping.unicast.hosts: ["10.3.185.251:9300"]
Full Exception on node 2:
[INFO ][o.e.d.z.ZenDiscovery ] [SANNNNN-1] failed to send join request to master [{SAN-2}{DxExoYHHTu2-rFvuvQSuEg}{OfYBe97HQCmcha63CFiYlQ}{10.3.185.251}{10.3.185.251:9300}], reason [RemoteTransportException[[SAN-2][10.3.185.251:9300][internal:discovery/zen/join]]; nested: IllegalArgumentException[can't add node {SANNNNN-1}{DxExoYHHTu2-rFvuvQSuEg}{hP4gLRugRgWzSuNnEhGHSw}{10.3.185.250}{10.3.185.250:9300}, found existing node {SAN-2}{DxExoYHHTu2-rFvuvQSuEg}{OfYBe97HQCmcha63CFiYlQ}{10.3.185.251}{10.3.185.251:9300} with the same id but is a different node instance]; ]
OK, so the issue was copying the elasticsearch folder from one node to another over scp. Elasticsearch saves the node id in the elasticsearch/data/ folder. I deleted the data folder on one node and restarted it. The cluster is up and running. Hope this saves someone the hassle.
Remove the directory <Elasticsearch home>/data and restart the ES node. This issue is caused by Elasticsearch storing the node id in that directory, and it is a common mistake when copying a working Elasticsearch directory from one node to another.
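For example, the fix described above might look like the sketch below (the service commands are assumptions for a systemd-managed install; note that this also deletes any shard data stored locally on that node):
sudo systemctl stop elasticsearch      # or stop the process however you started it
rm -rf <Elasticsearch home>/data       # removes the stored node id (and any local shard data!)
sudo systemctl start elasticsearch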
After fixing the issue, check the cluster status like this:
curl -X GET "localhost:9200/_cluster/health"
This works fine with Elasticsearch 6 as well.
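To confirm that the two instances now have distinct node ids, you can also list the nodes (the id column shows a prefix of each node id):
curl -X GET "localhost:9200/_cat/nodes?v&h=id,ip,name"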
I had the same issue after cloning a data node in Azure. I ended up finding the data folder by starting at the root folder:
/datadisks/disk1/elasticsearch/data
I kept reading that others found the folder elsewhere, so I wanted to share mine here.

Questions on starting a Locator using the snappydata/bin> ./snappy-shell script

SnappyData v0.5
Here's the command I used to start a Locator:
ubuntu#ip-172-31-8-115:/snappydata-0.5-bin/bin$ ./snappy-shell locator start
Starting SnappyData Locator using peer discovery on: 0.0.0.0[10334]
Starting DRDA server for SnappyData at address localhost/127.0.0.1[1527]
Logs generated in /snappydata-0.5-bin/bin/snappylocator.log
SnappyData Locator pid: 9352 status: running
It looks like it starts the DRDA server locally, with no outside interface for a client to connect to, so I cannot reach my SnappyData Locator using a JDBC URL from an outside client host (e.g. my SquirrelSQL editor).
This does not connect:
jdbc:snappydata://MY-AWS-PUBLIC-IP-HERE:1527/
What property do I pass to my ./snappy-shell locator start command to get the DRDA server to start on a public IP address instead of "localhost/127.0.0.1"?
Use the -client-bind-address and -client-port options. For the locator, also use the -peer-discovery-address and -peer-discovery-port options to specify the bind address for other locators/servers/leads (which is what they pass in their -locators=<address>:<port> setting):
snappy-shell locator start -peer-discovery-address=<internal IP for peers> -client-bind-address=<public IP for clients>
See the output of snappy-shell locator --help for commonly used options.
For SnappyData releases, you may find it much easier to use the global configuration for all of the locators, servers, and leads; check "configuring the cluster" in the docs.
This allows specifying all options for all JVMs of the cluster in conf/locators, conf/leads, and conf/servers, then starting everything with snappy-start-all.sh, checking status with snappy-status-all.sh, and stopping everything with snappy-stop-all.sh.
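As a sketch, a line in conf/locators combining the options above might look like this (the hostname and placeholder values are illustrative):
locator-host -peer-discovery-address=<internal IP for peers> -client-bind-address=<public IP for clients> -client-port=1527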
On a related note, we at SnappyData Inc. are developing scripts to let users quickly launch a SnappyData cluster on AWS.
If you want to try it out, the steps below will guide you. We would love to hear your feedback on this.
Download its development branch: git clone https://github.com/SnappyDataInc/snappydata.git -b SNAP-864 (you don't need to clone the repo for this, but I could not find a way to attach the scripts here).
Go to the ec2 directory: cd snappydata/cluster/ec2
Run snappy-ec2: ./snappy-ec2 -k ec2-keypair-name -i /path/to/keypair/private/key/file launch your-cluster-name
See this README for more details.

Restarting a MySQL server managed by Ambari

I have a scenario where I need to change several parameters of a Hadoop cluster managed by Ambari to document the performance of a particular application. Changing the configs entails a restart of the affected components.
I am using the Ambari REST API to achieve this. I figured out how to do this for all Hadoop service components, but I'm not sure whether the API provides a way to restart the MySQL server that Hive uses.
I have the following questions:
Is a mere stop and start of mysqld on the appropriate machine enough to ensure that the required configuration changes are recognized by Ambari and the application?
I chose the 'New MySQL database' option while installing Hive via Ambari. Does this mean that restarts are reflected in Ambari only when they are carried out from the Ambari UI?
Your inputs would be highly appreciated.
Thanks!
Found a solution to the problem. I used the following commands against the Ambari REST API to change configurations and restart services from the backend.
Log in to the host on which the Ambari server is running and use the already provided configs.sh script as described below.
Modifying configuration files
#!/bin/bash
# Set a single Ambari config property; pass cluster, config file, property name, and value as arguments.
CLUSTER_NAME=$1
CONFIG_FILE=$2
PROPERTY_NAME=$3
PROPERTY_VALUE=$4
/var/lib/ambari-server/resources/scripts/configs.sh -port <ambari-server-port> set localhost "$CLUSTER_NAME" "$CONFIG_FILE" "$PROPERTY_NAME" "$PROPERTY_VALUE"
where CONFIG_FILE can take values like tez-site, mapred-site, hadoop-site, hive-site, etc. PROPERTY_NAME and PROPERTY_VALUE should be set to values relevant to the specified CONFIG_FILE.
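For example, a hypothetical invocation that sets a Hive property on cluster c1 (8080 is the default Ambari server port; the property and value are only illustrative):
/var/lib/ambari-server/resources/scripts/configs.sh -port 8080 set localhost c1 hive-site "hive.exec.parallel" "true"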
Restarting host components
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST -d '
{
  "RequestInfo":{
    "command":"RESTART",
    "context":"Restart MySQL server used by Hive Metastore on node3.cluster.com and HDFS client on node1.cluster.com",
    "operation_level":{
      "level":"HOST",
      "cluster_name":"c1"
    }
  },
  "Requests/resource_filters":[
    {
      "service_name":"HIVE",
      "component_name":"MYSQL_SERVER",
      "hosts":"node3.cluster.com"
    },
    {
      "service_name":"HDFS",
      "component_name":"HDFS_CLIENT",
      "hosts":"node1.cluster.com"
    }
  ]
}' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests
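The POST returns a request resource whose progress can be polled until the restart completes (the request id 42 below is illustrative; use the id from the POST response):
curl -u admin:admin -H 'X-Requested-By: ambari' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests/42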
Reference Links:
Restarting components
Modifying configurations
Hope this helps!

where to specify the root directory of hadoop on slave node?

I need to set up a Hadoop/HDFS cluster with one namenode and two datanodes. I am aware of the conf/slaves file, which lists the machines datanodes run on. But how can I specify where Hadoop/HDFS is installed locally on a slave node? Also, which user account should be used to start HDFS there?
Edit: in the log files, I find the following error when I try start-dfs.sh:
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
The user is expected to be the same as on the master node. The location of the actual data can be modified by changing the dfs.data.dir property in hadoop-site.xml.
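As for the error in the edit: "Does not contain a valid host:port authority: file:///" usually means the namenode URI was never configured, so Hadoop falls back to the local file:/// filesystem. A minimal sketch of the relevant entries (hostname, port, and path are illustrative; newer Hadoop versions split hadoop-site.xml into core-site.xml and hdfs-site.xml):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>  <!-- must be an hdfs:// URI, not file:/// -->
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/var/hadoop/dfs/data</value>  <!-- where each datanode stores blocks locally -->
  </property>
</configuration>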