OK, this shouldn't be this hard: I'm trying to run 2 nodes in an Elasticsearch cluster, and I get an exception when starting node-1 (node-2, which is the master, is already started). I'm using Elasticsearch v5.0.0 for both instances.
Exception: failed to send join request to master, reason: RemoteTransportException: can't add node, found existing node with the same id but is a different node instance
Node-1 config:
node.name: SANNNNN-1
network.host: 10.3.185.250
discovery.zen.ping.unicast.hosts: ["10.3.185.251:9300"]
Node-2 config:
node.name: SAN-2
network.host: 10.3.185.251
discovery.zen.ping.unicast.hosts: ["10.3.185.251:9300"]
Full exception (logged on node 1):
[INFO ][o.e.d.z.ZenDiscovery ] [SANNNNN-1] failed to send join request to master [{SAN-2}{DxExoYHHTu2-rFvuvQSuEg}{OfYBe97HQCmcha63CFiYlQ}{10.3.185.251}{10.3.185.251:9300}], reason [RemoteTransportException[[SAN-2][10.3.185.251:9300][internal:discovery/zen/join]]; nested: IllegalArgumentException[can't add node {SANNNNN-1}{DxExoYHHTu2-rFvuvQSuEg}{hP4gLRugRgWzSuNnEhGHSw}{10.3.185.250}{10.3.185.250:9300}, found existing node {SAN-2}{DxExoYHHTu2-rFvuvQSuEg}{OfYBe97HQCmcha63CFiYlQ}{10.3.185.251}{10.3.185.251:9300} with the same id but is a different node instance]; ]
OK, so the issue was copying the Elasticsearch folder from one node to the other over scp. Elasticsearch saves the node id in the elasticsearch/data/ folder. I deleted the data folder on one node and restarted it, and the cluster is now up and running. Hope this saves someone the hassle.
Remove the <Elasticsearch home>/data directory and restart the ES node. This issue is caused by Elasticsearch storing the node id in that directory, and it is a common mistake when copying a working Elasticsearch directory from one node to another.
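A minimal sketch of that fix (the path and the systemd service name are assumptions, so adjust them to your install; note this also wipes any local shard data on that node):
# on the node that received the copied directory: stop it, wipe the stored node id, restart
sudo systemctl stop elasticsearch        # assumption: adjust if you run ES from a tarball
rm -rf /path/to/elasticsearch/data       # placeholder path: use your actual <Elasticsearch home>/data
sudo systemctl start elasticsearch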
After fixing the issue, check the cluster status like this:
curl -X GET "localhost:9200/_cluster/health"
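If both nodes have joined, the response should report "number_of_nodes": 2; adding ?pretty makes the JSON easier to read:
curl -X GET "localhost:9200/_cluster/health?pretty"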
This works with Elasticsearch 6 as well.
I had the same issue after cloning a data node in Azure. I eventually found the data folder by starting from the root folder:
/datadisks/disk1/elasticsearch/data
I kept reading that others found the folder elsewhere, so I wanted to share where mine was.
I'm trying to get gliderlabs registrator running on Bluemix, but I'm having issues: the container won't start, failing with
400 The plain HTTP request was sent to HTTPS port
What I think is happening is that my Docker host is running on tcp://containers-api.eu-gb.bluemix.net:8443, so the Docker REST APIs are served over HTTPS. However, I suspect gliderlabs/registrator is using HTTP by default.
So, anyone got any ideas how to get this to work?
Steve
Looking at that package, it uses the library github.com/fsouza/go-dockerclient to access the Docker remote API, specifically the NewClientFromEnv() call. Per the go-dockerclient readme, it should pick up the env vars for HTTPS if they're there, i.e. make sure you're exporting all three env vars: DOCKER_HOST, DOCKER_TLS_VERIFY, and DOCKER_CERT_PATH.
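For example, a minimal sketch (the cert directory is an assumption, so point it at wherever your Bluemix TLS certs actually live; if registrator itself runs as a container, these would need to be passed in with -e and the cert directory mounted):
export DOCKER_HOST=tcp://containers-api.eu-gb.bluemix.net:8443
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.ice/certs   # assumption: replace with your actual cert directory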
Another possibility, per the comments on registrator: check that you're using gliderlabs/registrator:master instead of gliderlabs/registrator:latest. I just pulled both to check, and "latest" is 14 months old vs. 6 days for "master".
For testing, I want to be able to run several IPFS nodes on a single machine.
This is the scenario:
I am building small services on top of the IPFS core library, following the Making your own IPFS service guide. When I try to put the client and the server on the same machine (note that each of them creates its own IPFS node), I get the following:
panic: cannot acquire lock: Lock FcntlFlock of /Users/long/.ipfs/repo.lock failed: resource temporarily unavailable
Usually, when you start with IPFS, you use ipfs init, which creates a new node. The default data and config for that node are stored at ~/.ipfs. Here is how you can create a second node and configure it so it can run beside your default node.
1. Create a new node
For a new node you have to run ipfs init again, for instance like this:
IPFS_PATH=~/.ipfs2 ipfs init
This will create a new node at ~/.ipfs2 (not using the default path).
2. Change Address Configs
As both of your nodes would now bind to the same ports, you need to change the port configuration so that both nodes can run side by side. For this, open ~/.ipfs2/config and find Addresses:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
}
Change it to, for example, the following:
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5002",
"Gateway": "/ip4/127.0.0.1/tcp/8081",
"Swarm": [
"/ip4/0.0.0.0/tcp/4002",
"/ip6/::/tcp/4002"
]
}
With this, you should be able to run both nodes, .ipfs and .ipfs2, on a single machine.
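If you prefer not to edit the JSON by hand, the same change can be made with the ipfs config command (a sketch, using the same ports as above):
IPFS_PATH=~/.ipfs2 ipfs config Addresses.API /ip4/127.0.0.1/tcp/5002
IPFS_PATH=~/.ipfs2 ipfs config Addresses.Gateway /ip4/127.0.0.1/tcp/8081
IPFS_PATH=~/.ipfs2 ipfs config --json Addresses.Swarm '["/ip4/0.0.0.0/tcp/4002", "/ip6/::/tcp/4002"]'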
Notes:
Whenever you use .ipfs2, you need to set the env variable IPFS_PATH=~/.ipfs2.
In your example you need to change either your client or your server node from ~/.ipfs to ~/.ipfs2.
You can also start the daemon on the second node using IPFS_PATH=~/.ipfs2 ipfs daemon &
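To double-check that the two daemons really are separate peers, you can compare their identities (a quick sketch):
ipfs daemon &
IPFS_PATH=~/.ipfs2 ipfs daemon &
ipfs id                        # peer info of the default node
IPFS_PATH=~/.ipfs2 ipfs id     # peer info of the second node; the two "ID" values should differ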
Hello, I use .ipfs2. After running the two daemons at the same time, I can indeed open localhost:5001/webui, but opening the second one at localhost:5002/webui gives an error, as shown in the attachment.
Here are some ways I've used to create multiple nodes/peer ids.
I use Windows 10.
1st node: go-ipfs (latest version)
2nd node: Siderus Orion ipfs (connects to an Orion node, not a local one) -- https://orion.siderus.io/
Use VirtualBox to run a minimal Ubuntu installation. (You can set up as many as you want.)
Repeat the process and you have 4 nodes, or as many as you want.
https://discuss.ipfs.io/t/ipfs-manager-download-install-manage-debug-your-ipfs-node/3534 is another GUI that installs IPFS and lets you manage all the ipfs commands without the CMD. He released it just a few days ago and it looks well worth checking out.
Disclaimer: I am not a coder or computer professional, just a huge fan of IPFS! I hope we can raise awareness and change the world.
In Moqui, I am trying to configure it to use MySQL: I commented out Derby and uncommented MySQL in the default conf, copied the connector to the framework lib, and included the dependency in the framework build.gradle. On running load, I get this error: java.lang.reflect.InvocationTargetException javax.management.InstanceAlreadyExistsException: bitronix.tm:type=JDBC,UniqueName=DEFAULT_transactional_DS,Id=0 -- thanks for any help.
Can you post a snippet of the code you have modified in MoquiDefaultConf.xml and the build.gradle file?
A viable alternative for configuring MySQL with Moqui is to make the related settings in the configuration files (i.e. MoquiDevConf.xml for a development instance, MoquiStagingConf.xml for a staging instance, and MoquiProductionConf.xml for a production instance). Follow the steps below to configure MySQL with Moqui.
Since you are presumably doing development, you only need to make changes in the MoquiDevConf.xml file.
Replace the <entity-facade> code in MoquiDevConf.xml with the following code.
<entity-facade crypt-pass="MoquiDefaultPassword:CHANGEME">
<datasource group-name="transactional" database-conf-name="mysql" schema-name="">
<inline-jdbc jdbc-uri="jdbc:mysql://127.0.0.1:3306/MoquiTransactional?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8"
jdbc-username="MYSQL_USER_NAME" jdbc-password="MYSQL_PASSWORD" pool-minsize="2" pool-maxsize="50"/>
</datasource>
</entity-facade>
In the code above, 'MoquiTransactional' is the name of the database. Replace MYSQL_USER_NAME and MYSQL_PASSWORD with your MySQL username and password.
Create a database in MySQL (as per the code above, create the database with name MoquiTransactional).
Add the jdbc driver for MySQL in the runtime/lib directory.
In the MoquiInit.properties file, point the "moqui.conf" property at the MoquiDevConf.xml file, i.e. moqui.conf=conf/MoquiDevConf.xml
Now simply build, load, and run.
To answer your question about loading seed data: you can simply run the gradle command gradle load -Ptypes=seed, which loads only the seed type data.
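Putting the steps above together, a typical sequence might look like this sketch (the connector jar name and the java -jar run command are assumptions based on a standard Moqui framework checkout; adjust to your setup):
mysql -u root -p -e "CREATE DATABASE MoquiTransactional DEFAULT CHARACTER SET utf8;"
cp mysql-connector-java-<version>.jar runtime/lib/   # placeholder jar name: the MySQL JDBC driver
gradle build
gradle load                  # or: gradle load -Ptypes=seed   to load only seed data
java -jar moqui.war          # assumption: the usual way to start a standard Moqui build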
Without more details, my best guess is that you have another instance of Bitronix running on the machine; given the UniqueName, it is almost certainly another instance of Moqui. Make sure no other instance is running, killing background processes if there are any, before starting your new instance.
I need to set up a Hadoop/HDFS cluster with one namenode and two datanodes. I am aware of the conf/slaves file, which lists the machines the datanodes run on. But how can I specify where Hadoop/HDFS is locally installed on a slave node? Also, which user account should be used to start HDFS there?
Edit: in the log files, I find the following error when I try to run start-dfs.sh:
ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
The user is expected to be the same as on the master node. The location of the actual data can be modified by changing the dfs.data.dir property in hadoop-site.xml.
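The "Does not contain a valid host:port authority: file:///" error usually means the default filesystem has not been pointed at HDFS yet. A minimal sketch for an old-style hadoop-site.xml (hostname, port, and data path are placeholders; newer Hadoop versions split these settings across core-site.xml and hdfs-site.xml):
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>   <!-- placeholder: your namenode's host and port -->
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hdfs/data</value>      <!-- placeholder: local path for datanode blocks -->
  </property>
</configuration>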
I've created this config for redis [/etc/redis/map.conf]:
include /etc/redis/ideal.conf
port 11235
pidfile /var/run/redis-map.pid
logfile /var/log/redis/map.log
dbfilename map.rdb
As you can see, it includes /etc/redis/ideal.conf; this file actually exists and we have read permissions.
Also there is another file, slightly different; consider [/etc/redis/storage.conf]:
include /etc/redis/ideal.conf
pidfile /var/run/redis-storage.pid
port 8000
bind 192.168.0.3
logfile /var/log/redis/storage.log
dbfilename dump_storage.rdb
My problem is: I can launch redis-server with storage.conf (and everything works fine), but map.conf leads to the following error:
Reading the configuration file, at line 1
>>> 'include /etc/redis/ideal.conf'
Bad directive or wrong number of arguments
failed
Version of redis is 2.2.
Where did I go wrong?
Sorry guys.
I was using different instances of Redis.
The instance for storage.conf was launched by /usr/local/bin/redis-server, but map.conf was launched by /usr/bin/redis-server; the second one is broken.
Thank you anyway.
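If you hit something similar, a quick way to spot this kind of mismatch (a sketch; redis-server -v has printed the version for a long time, but double-check on very old builds):
which -a redis-server            # lists every redis-server on the PATH
/usr/bin/redis-server -v         # compare the versions the two binaries report
/usr/local/bin/redis-server -v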