Introduction
When configuring Elasticsearch I ran into a problem with binding the
listening interfaces.
Somehow the documentation does not explain how to set up multiple network interfaces (the network and bind definitions).
Problem description
My intention is to set network.bind_host to both _eth1:ipv4_ and _local_.
Even when setting bind_host to _local_ only,
the Elasticsearch port 9200 is still reachable only via eth1 (of course I have restarted the server).
Solutions tried
I have tested the firewall configuration by setting up a netcat server on that port, and it works perfectly.
So this results in two questions:
How do I configure multiple NICs? (What is the notation?)
Would I need to change network.publish_host?
Any other pointers?
Current configuration:
network.bind_host: _eth1:ipv4_
network.publish_host: _eth1:ipv4_
network.host: _eth1:ipv4_
Also tested configuration:
network.bind_host: _local_
network.publish_host: _eth1:ipv4_
network.host: _local_
PS: AFAIK, publish_host is the NIC used for inter-server communication.
Using a YAML list for the desired property:
network.bind_host:
- _local_
- _en0:ipv4_
If I understand this answer correctly, publish_host should be _eth1:ipv4_. Your publish_host has to be one of the interfaces to which Elasticsearch binds via the bind_host property.
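Putting the two answers together, a minimal combined configuration might look like this (assuming eth1 is the interface other nodes should reach):

# bind to the loopback interface and to eth1's IPv4 address
network.bind_host:
- _local_
- _eth1:ipv4_
# publish a single routable address for other nodes and clients
network.publish_host: _eth1:ipv4_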
The answer linked above is actually great, so I have to cite it here:
"bind_host" is the host that an Elasticsearch node uses in the socket
bind call when starting the network. Due to socket programming model,
you can "bind" to an address. By referencing an "address", the socket
allows access to one or all underlying network devices. There are
several addresses with predefined semantics, e.g. 0.0.0.0 is reserved
for "bind to all network devices". So the "bind_host" address does not
necessarily reflect a single unique address.
"publish_host" must be a single unique network address. It is used for
connect calls by other nodes, not for socket bind call by the node
itself. By using "publish_host" all nodes and clients can be sure they
can connect to this node. Declaring this single unique address to the
outside can be interpreted as "publishing", so it is called
"publish_host".
You can not set "bind_host" and "publish_host" to arbitrary values,
the values must adhere to the underlying socket model.
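To make the quoted socket model concrete, here is a minimal C sketch of the bind call it describes; the port matches Elasticsearch's 9200, and the specific address is a made-up example:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9200);

    /* 0.0.0.0: "bind to all network devices" */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    /* a single address (what _eth1:ipv4_ resolves to) restricts the
       socket to one NIC instead, e.g.:
       inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr); */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        perror("bind");
    return 0;
}

A publish_host, by contrast, never appears in a bind call at all; it is the single address other nodes use in their connect calls.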
Related
There is an OpenShift Origin cluster, version 3.11 (upgraded from 3.9).
I want to add two new nodes to the cluster.
The node hosts were created in an OpenStack project behind NAT and use an internal class C network (192.168.xxx.xxx); there are also floating IPs attached to the hosts.
There are DNS records which resolve the FQDNs of the hosts to the floating IPs and back.
The scaleup playbook works fine, but the new nodes appear in the cluster with their internal IPs, and thus nothing works.
In OpenShift v3.9 and earlier I used the inventory variable
openshift_set_node_ip = true
and pointed openshift_ip at the node being added.
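For illustration, such an inventory entry looked roughly like this (host name and IP are made up):

[new_nodes]
node3.example.com openshift_set_node_ip=true openshift_ip=192.168.0.13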
Now it doesn't work.
What should I use instead of openshift_set_node_ip?
I had a similar problem, which I solved after reading https://stackoverflow.com/a/29496135, where Kashyap explains how to change the ansible_default_ipv4 fact used to guess the IP address to use.
This fact is created by testing a call to 8.8.8.8 (https://github.com/ansible/ansible/blob/e41f1a4d7d8d7331bd338a62dcd880ffe27fc8ea/lib/ansible/module_utils/facts/network/linux.py#L64). You can therefore add a specific route to 8.8.8.8 to change the ansible_default_ipv4 fact's result:
sudo ip r add 8.8.8.8 via YOUR_RIGHT_GATEWAY
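You can verify the resulting fact afterwards with the setup module (the host name is a placeholder):

ansible your_host -m setup -a 'filter=ansible_default_ipv4'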
Maybe this helps to solve your case.
I am trying out a small POC (a learning experiment) on Docker. I have three Docker images, one each for a storefront, a search engine, and a database engine, called storefront, solr, and docmysql respectively. I have tried running them in a Docker swarm (on a single node) on EC2 and it works fine.
For the POC, I next needed to move this to AWS ECS using the EC2 launch type on a single AMI that is not Amazon-ECS-optimized, on which I have installed and started an ecs-agent. I have created three services, with one task for each of the three images, configured as containers within the tasks. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a Docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created I see two entries in Route 53: an SRV record and an A record. The A record has as its name <task-id>.docmysql.local; if I use that in the database connection string, it works, but it is obviously not the right thing to do with a hardcoded task ID. I have read about AWS Cloud Map (service discovery) but am still not very clear how to go about it. I will not be putting any load balancer in front of my DB task in the service; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? Also, why did I not have this issue when I ran it as a Docker swarm?
I know I could use RDS instead of running my own database; I will try that, but for now I need this working, as this is what I started with. Thanks for any help.
Well, I'd raise some points before giving my own solution to the problem:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it with the EC2 launch type? If not, use Fargate; it is simpler to handle.
Now, I've faced that issue on Fargate and discovered that, depending on your container/task definitions, the database can live inside the same task for testing purposes, so 127.0.0.1 should be the answer.
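In that same-task case, the connection string from the question would simply become:

"jdbc:mysql://127.0.0.1/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"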
Across different tasks you need to work with the awsvpc network mode, so you will have this:
Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it. (from the AWS docs)
My suggestion is to create a Lambda function to discover your network interface dynamically; a rough sketch of that lookup follows the links below.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
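As a rough sketch of what such a lookup amounts to (whether done from Lambda via the SDK or by hand), the task's private IP can be read from its ENI attachment; the cluster name here is a placeholder:

# find the running task of the service, then inspect its ENI details
aws ecs list-tasks --cluster my-cluster --service-name docmysql
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> --query 'tasks[0].attachments[0].details'

The details of an awsvpc attachment include the privateIPv4Address your storefront would connect to.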
I am using C to communicate with MySQL;
it uses mysql_real_connect() to connect to the DB engine.
I am just curious to know why this function requires both a socket name and a port number.
Can we not use only the port number to communicate with MySQL?
I googled for it but couldn't find an answer.
Sorry for such a basic question.
If you are using named pipes or UNIX domain sockets, then the socket name specifies the pipe or socket name. Otherwise, you can just pass 0 (NULL) as the name.
You don't, obviously. You supply one or the other, depending on the protocol chosen by the value of the 'host' parameter, as described in the document you cited.
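To illustrate both cases, a minimal C sketch (host, credentials, and socket path are placeholders; build with the flags from mysql_config --cflags --libs):

#include <mysql.h>
#include <stdio.h>

int main(void) {
    MYSQL *conn = mysql_init(NULL);
    if (!conn)
        return 1;

    /* TCP connection: the port is used, the socket name is passed as NULL */
    if (!mysql_real_connect(conn, "db.example.com", "user", "secret",
                            "mydb", 3306, NULL, 0))
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));

    /* Local connection via a UNIX domain socket: the socket name is used
       and the port is ignored (host must be "localhost"):
       mysql_real_connect(conn, "localhost", "user", "secret",
                          "mydb", 0, "/var/run/mysqld/mysqld.sock", 0); */

    mysql_close(conn);
    return 0;
}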
I've got a trio of Windows servers (data1, data2 and datawitness) that aren't part of any domain and don't use AD. I'm trying to set up mirroring based on the instructions at http://alan328.com/SQL2005_Database_Mirroring_Tutorial.aspx. I've had success right up until the final set of instructions where I tell data1 to use datawitness as the witness server. That step fails with the following message:
alter database MyDatabase set witness = 'TCP://datawitness.somedomain.com:7024'
The ALTER DATABASE command could not be sent to the remote server instance 'TCP://datawitness.somedomain.com:7024'. The database mirroring configuration was not changed. Verify that the server is connected, and try again.
I've tested both port 7024 and port 1433 using telnet, and both servers can indeed connect with each other. I'm also able to add a connection to the witness server from SQL Server Manager on the primary server. I've used the Configuration Manager on both servers to enable Named Pipes and to verify that IP traffic is enabled and using port 1433 by default.
What else could it be? Do I need any additional ports open for this to work? (The firewall rules are very restrictive, but I know traffic on the previously mentioned ports is explicitly allowed)
Caveats that are worth mentioning here:
Each server is in a different network segment
The servers don't use AD and aren't part of a domain
There is no DNS server configured for these servers, so I'm using the HOSTS file to map domain names to IP addresses (verified using telnet, ping, etc).
The firewall rules are very restrictive and I don't have direct access to tweak them, though I can call in a change if needed
Data1 and Data2 are using SQL Server 2008, Datawitness is using SQL Express 2005. All of them use the default instance (i.e. none of them are named instances)
After combing through blogs, KB articles, and forum posts, and reinstalling, reconfiguring, rebooting, profiling, etc., I finally found the key to the puzzle: an entry in the event log on the witness server reported this error:
Database mirroring connection error 2 'DNS lookup failed with error: '11001(No such host is known.)'.' for 'TCP://ABC-WEB01:7024'.
I had used a hosts file to map mock domain names for all three servers, in the form datax.mydomain.com. However, it is now apparent that the witness was trying to communicate back using the name of the primary server, which I did not have a hosts entry for. Simply adding another entry for ABC-WEB01, pointing to the primary server, did the trick. No errors, and the mirroring is finally complete.
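For illustration, the HOSTS file on the witness ended up looking something like this (the addresses are made up):

192.0.2.1   data1.mydomain.com
192.0.2.2   data2.mydomain.com
192.0.2.3   datawitness.mydomain.com
192.0.2.1   ABC-WEB01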
Hope this saves someone else a billion hours.
I'd like to add one more sub-answer to this specific question. As my comment on Chris' answer shows, my mirror was showing up as disconnected (to the witness). Apparently you need to reboot the witness server (or, in my case, just restart the service).
As soon as I did this, the mirror showed the witness connection as Connected!
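(For the record, restarting the service from an elevated prompt looks like this; the service name assumes a default instance:)

net stop MSSQLSERVER
net start MSSQLSERVER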
See: http://www.bigresource.com/Tracker/Track-ms_sql-cBsxsUSH/
I'm curious if it is possible to map a UNIX socket on to an INET socket. The situation is simply that I'd like to connect to a MySQL server. Unfortunately it has INET sockets disabled and therefore I can only connect with UNIX sockets. The tools I'm using/writing have to connect on an INET socket, so I'm trying to see if I can map one on to the other.
It took a fair amount of searching but I did find socat, which purportedly does what I'm looking for. I was wondering if anyone has any suggestions on how to accomplish this. The command-line I've been using (with partial success) is:
socat -v UNIX-CONNECT:/var/lib/mysql/mysql.sock TCP-LISTEN:6666,reuseaddr
Now I can make connections and talk to the server. Unfortunately any attempts at making multiple connections fail as I need to use the fork option but this option seems to render the connections nonfunctional.
I know I can tackle the issue with Perl (my preferred language), but I'd rather avoid writing the entire implementation myself. I am familiar with the IO::Socket libraries; I am simply hoping someone has experience doing this sort of thing. Open to suggestions/ideas.
Thanks.
Reverse the order of your arguments to socat, and it works.
socat -v tcp-l:6666,reuseaddr,fork unix:/var/lib/mysql/mysql.sock
This instructs socat to:
1. Listen on TCP port 6666 (with SO_REUSEADDR).
2. Wait to accept a connection.
3. When a connection is made, fork. In the child, continue with the steps below; in the parent, go back to step 2.
4. Open a UNIX domain connection to the /var/lib/mysql/mysql.sock socket.
5. Transfer data between the two endpoints, then exit.
Writing it the other way around
socat -v unix:/var/lib/mysql/mysql.sock tcp-l:6666,reuseaddr,fork
doesn't work, because this instructs socat to:
1. Open a UNIX domain connection to the /var/lib/mysql/mysql.sock socket.
2. Listen on TCP port 6666 (with SO_REUSEADDR).
3. Wait to accept a connection.
4. When a connection is made, spawn a worker child to transfer data between the two addresses.
The parent continues to accept connections on the second address, but no longer has the first address available: it was given to the first child. So nothing useful can be done from this point on.
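To sanity-check the relay, point a MySQL client at the TCP side; --host=127.0.0.1 forces a TCP connection instead of the local socket (the user name is a placeholder):

mysql --host=127.0.0.1 --port=6666 -u someuser -p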
Yes, you can do this in Perl.
Look at perlipc, IO::Select, IO::Socket and Beej's Guide to Network Programming.
You might want to consider doing it in POE - it's an asynchronous library for dealing with events, so it looks like a great fit for the task.
It is not 100% relevant, but I used POE to write a proxy between a stateless protocol (HTTP) and a stateful protocol (a telnet session, more specifically a MUD session), and it was rather simple. You can check the code here: http://www.depesz.com/index.php/2009/04/08/learning-poe-http-2-mud-proxy/.
In the comments somebody also suggested Coro/AnyEvent - I haven't played with them yet, but you might want to check them out.