MarkLogic Cluster - Add data in 1st host & update in 2nd host throws error - updates

Our MarkLogic setup is as follows:
- 3 hosts
- Data configuration:
  - 1 master forest on each host
  - 1 replica for each host on a different host

We have a MarkLogic cluster (3 hosts, with failover) deployed on Azure VMs, and we are using MarkLogic Content Pump (MLCP) to ingest data into MarkLogic.
This is what we have implemented:
- Installed Java on the 1st host
- Copied the MLCP tool there
- Ingested data by providing the 1st host as the host parameter

Now we have a batch of XMLs to apply as updates back into MarkLogic. With the failover implementation, the 1st host became unavailable for some reason, so when I tried to ingest the updates through the 2nd host, I started getting an error saying the record was ingested on a different host, so the update can't happen from here.
So I would like to know the best practices to be followed for the ingestion process.

To enable the system to fail over reliably, you will also need to set up replicas for the Security, App-Services, and any other system databases you may be using as part of your architecture.
The reason you are unable to connect to the other hosts is that the Security database is only on host 1, so you are unable to authenticate. Once it is configured for failover, you should no longer run into those issues.
The documentation covers that setup here:
https://docs.marklogic.com/guide/cluster/config-both-failover#id_57935
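Once the Security database has a replica, you can also make the ingestion itself tolerant of a host going down by pointing MLCP at the whole cluster instead of a single host: recent MLCP versions accept a comma-separated list in the -host option and will connect through whichever listed host is up. A minimal sketch, assuming the mlcp jar is on the classpath (ContentPump.runCommand is the same entry point mlcp.sh invokes); the host names, credentials, and paths are placeholders:

import com.marklogic.contentpump.ContentPump;

public class IngestUpdates {
    public static void main(String[] args) throws Exception {
        // Same arguments you would pass to mlcp.sh; listing all three
        // hosts lets MLCP pick a live one if host1 is down
        String[] mlcpArgs = {
            "import",
            "-host", "host1.example.com,host2.example.com,host3.example.com",
            "-port", "8000",
            "-username", "admin",
            "-password", "admin",
            "-input_file_path", "/data/update-batch",
            "-mode", "local"
        };
        System.exit(ContentPump.runCommand(mlcpArgs));
    }
}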

Related

Error Connect MySQL Communications link failure The last packet sent successfully to the server was 0 milliseconds ago

I am trying to connect to a MySQL database from Data Fusion, but I am getting the following error: "Communications link failure. The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server." The database is accessed through its public IP on port 3306; from my machine I can connect perfectly, but from Data Fusion I cannot.
As John Hanley pointed out in his comment, it's probably due to a connectivity issue with your SQL instance.
A possible reason is that you have not enabled your instance to be connected via its public IP. If that's the case, go to your SQL instance and edit its configuration, adding an authorized network (if you haven't done so previously) and providing an IP range that includes your Data Fusion instance. Keep in mind that if you configure your instance to accept connections on its public IP address, you should also configure it to use SSL to keep your data secure. If that was the issue, you should now be able to connect properly.
Also, be sure to check that the Dataproc cluster that your Data Fusion instance uses under the hood has the proper configuration (you shouldn't need to worry about this if you haven't changed anything about the Dataproc cluster).
This is the best advice I can give without further details. If this doesn't work for you, we're going to need more information.
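If you want to rule out the database end, a minimal JDBC connectivity check run from a machine inside an authorized network can help. This is only a sketch; the IP address, database name, and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectivityCheck {
    public static void main(String[] args) throws Exception {
        // useSSL/requireSSL ask Connector/J to encrypt the connection, as
        // recommended when the instance is exposed on a public IP
        String url = "jdbc:mysql://203.0.113.10:3306/mydb?useSSL=true&requireSSL=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}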
My two cents:
I use one Windows PC with several Linux boxes at home; my main station is the Windows one. I've been using SQLyog just to probe tables on the remote DB, and I use my class C address to reach the server. It works well; by design I want to access the DB server from anywhere in my home.
Some 60% to 75% of problems of that kind turn out to be related to the .cnf file configuration (for example, a bind-address that rejects remote connections).
Regards,
Steph

Send data to a MySQL server over an internet connection

I'm a total beginner to MySQL; I'm more of a firmware specialist. I'm working on an application where I will be getting GPS coordinates from a microcontroller + cellular device, and I would like some way to store the coordinates and do processing on them. I figured a database hosted on a server made the most sense, which is what has brought me to MySQL.
Basically, I'm wondering what the basic protocol is for sending data to a MySQL server over an internet connection (my device has data). Like how do I connect to the server and publish data to it?
I'm experienced with MQTT, and I think I could do TCP as well, but I'm looking for a protocol that is not super power-intensive, and I can't use anything that requires an operating system, like a Python script.
To be clear, I am NOT asking you to tell me every step of how this is done, but basically: what protocol and what tools could I use? Anything you can tell me would be appreciated.
I was thinking that I could use the MySQL client C code to help write a driver that would allow me to connect to the server. I'm experienced with writing drivers, and the microcontroller I'm using is programmed in C.
You need no direct connection to the DB at all. Your cellular device just needs to be able to establish a TCP connection to an IP address/port and send a byte stream through the connection. It can be a dumb unidirectional protocol that tolerates losses.
You then need some service listening on the other side that can parse your byte stream, extract the correct packets from it, and send the data to the database. Frankly, that service can even be written in shell:
nc -lk 1234 | collector.sh
where collector.sh is a script like this:
#!/bin/sh
# Read one comma-separated record per line from stdin: lat,lon,datetime
while IFS=, read LAT LON DTIME
do
    mysql -e "INSERT INTO mygps.nmea (lat,lon,dtime) VALUES ($LAT, $LON, '$DTIME');"
done
Sure, it isn't the best solution, but it was really helpful for me at the very beginning. Then you can process the gathered data in any way you like.
Build a simple server that receives whatever data is gathered, and then have the server send the data to MySQL with the help of a MySQL connector. Building that part of the protocol will be quite time-consuming. - nbk
If you "can't use anything that requires an operating system" you need some middleware that can run the MySQL client driver to talk to the database, you will then use MQTT to pass data between your sensor and the middleware. If you don't want to write this middleware yourself, something like Node-RED might come handy.
You certainly can reimplement the driver for your MC, though I personally would not want to waste the time on something like this when I can assemble a solution from existing components. Database protocols are typically chatty, synchronous, and sensitive to network quality, and I wouldn't want to waste my MC cycles on that when I can make middleware do that asynchronously. - mustaccio
Simply "reverse ssh port forwarding"? That can be done, I think, with a single ssh command at one (or both) end of the connection. MySQL, by default, needs the client to connect on port 3306 to the server. - rick-james

ECS EC2 Launch Type: Service database connection string

I am trying out a small POC (learning experiment) on Docker. I have 3 Docker images, one each for a storefront, a search engine, and a database engine, called storefront, solr, and docmysql respectively. I have tried running them in a Docker swarm (on a single node) on EC2 and it works fine.
In the POC, I next needed to move this to AWS ECS using the EC2 launch type on a single non-Amazon-ECS-optimized AMI, on which I have installed and started an ecs-agent. I have created 3 services, with one task for each of the 3 images, configured as containers within the task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a Docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created I see 2 entries in Route 53: one SRV record and one A record. The A record has as its name <task-id>.docmysql.local. If I use this in the database connection string, it works, but it's obviously not the right thing to do with the hardcoded task ID. I have read about AWS Cloud Map (service discovery) but am still not very clear how to go about it. I will not be putting any load balancer in front of my DB task in the service; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? Also, why did I not have issues when I ran it as a Docker swarm?
I know I can use RDS instead of running my own database, and I will try that, but for now I need this working as it's what I started with. Thanks for any help.
Well, I'll raise some points before my own solution to the problem:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it on the EC2 launch type? If not, use Fargate; it is simpler to handle.
Now, I've faced that issue on Fargate, and discovered that, depending on your container/task definitions, the database can be reached inside the same task for testing purposes, so 127.0.0.1 should be the answer.
For different tasks you need to work with the awsvpc network mode, so you will have this:
"Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it." (from AWS)
My suggestion is to create a Lambda function to discover your network interface dynamically; a sketch of the lookup follows the links below.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
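To make the Lambda suggestion concrete, here is a sketch of the lookup such a function could perform with the AWS SDK for Java v2. The cluster and service names are placeholders, and the Lambda handler wiring is omitted:

import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.DescribeTasksResponse;
import software.amazon.awssdk.services.ecs.model.ListTasksResponse;

public class DbEndpointResolver {
    // Returns the private IP of the single docmysql task (awsvpc mode)
    public static String resolveDbHost() {
        try (EcsClient ecs = EcsClient.create()) {
            ListTasksResponse tasks = ecs.listTasks(r -> r
                    .cluster("my-poc-cluster")   // placeholder cluster name
                    .serviceName("docmysql"));
            DescribeTasksResponse desc = ecs.describeTasks(r -> r
                    .cluster("my-poc-cluster")
                    .tasks(tasks.taskArns()));
            // With awsvpc, each task's containers expose their ENI details
            return desc.tasks().get(0).containers().get(0)
                    .networkInterfaces().get(0).privateIpv4Address();
        }
    }
}

The storefront could then use the resolved IP (or a value the Lambda publishes somewhere, such as a parameter store) instead of hardcoding the task-specific A record.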

Connecting node to a cluster in Couchbase

I have created a bucket on my local system, and I am trying to connect another node which is located on a remote server. I am able to work with the nodes separately, but I need to join these two nodes to form a cluster. Is there a way to add the remote server's node to my local server by using the web UI?
When I tried to add the remote server's IP address by clicking "Add Server", I got the following error:
"Attention - Prepare join failed. Authentication failed. Verify username and password. Got HTTP status 401 from REST call post to http://XXX.XXX.XXX.XXX:8091/engageCluster2. Body was: []"
I used my local server's username and password. If I give that server's username and password, I get this error:
Attention - This node cannot add another node ('ns_1@XXX.XXX.XXX.XXX') because of cluster version compatibility mismatch. Cluster works in [4, 1] mode and node only supports [2, 0].
Is there a way to link them using the Java API? Can someone please help me with this?
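For reference, the web UI's "Add Server" button is a thin wrapper over the cluster REST API, so the same join can be issued from plain Java with an HTTP POST. This is only a sketch (the /controller/addNode endpoint and form fields follow Couchbase's cluster REST API; addresses and credentials are placeholders), and note it will not get past the version-compatibility error, which indicates the two nodes run Couchbase versions too far apart to join:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AddNode {
    public static void main(String[] args) throws Exception {
        // POST to the existing cluster, asking it to pull in the new node
        URL url = new URL("http://127.0.0.1:8091/controller/addNode");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        // Authenticate as the cluster's administrator
        String auth = Base64.getEncoder().encodeToString(
                "Administrator:password".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);
        // Credentials of the remote node being added
        String body = "hostname=XXX.XXX.XXX.XXX&user=Administrator&password=password";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}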

Spymemcached client auto-reconnect to another server in Couchbase Cluster?

I read the Couchbase rebalancing document (http://blog.couchbase.com/rebalancing-couchbase-part-i), which says: "A client losing its connection to the cluster will attempt to reestablish (configurable). Anytime it reconnects (first time or not) it gets the latest map that the cluster has. Ironically, a flaky network in theory might just help here to keep the map constantly updated during a rebalance, but that's for a different discussion."
I use spymemcached 2.7.3; how can I achieve that?
To give an example: my Java client adds two servers (10.0.0.40 and 10.0.0.15, via URL) to connect to the Couchbase cluster. But in reality, when 10.0.0.40 goes down, the persistent connection is not kept, and I have to restart my client to switch to 10.0.0.15. How can my client reconnect to 10.0.0.15 when 10.0.0.40 goes down, without restarting my application?
Updated:
I use the code below to connect to the Couchbase cluster:
import java.net.URI;
import java.util.ArrayList;
import net.spy.memcached.BinaryConnectionFactory;
import net.spy.memcached.MemcachedClient;
// Bootstrap list: every node's REST endpoint, so the client can fetch the cluster map
ArrayList<URI> listAddr = new ArrayList<URI>();
listAddr.add(new URI("http://10.0.0.40:8091/pools"));
listAddr.add(new URI("http://10.0.0.15:8091/pools"));
listAddr.add(new URI("http://10.0.0.16:8091/pools"));
MemcachedClient client = new MemcachedClient(new BinaryConnectionFactory(), listAddr, "test", "test", "");
I want my Java client to automatically reconnect to another server in the pool (.40, .15, .16) to get the topology (while my Java client is still running) if the first server in the pool (.40) fails.
Can I achieve this with spymemcached, or do I have to move to the Couchbase Java SDK?
The spymemcached Java client does not handle membase failover for a particular node; you can check here.
If you update your Java client to the Couchbase Java client, then you can handle failover by removing the failed node from the cluster.
For more information you can check here or here.
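To illustrate the suggested move, here is a minimal sketch using the (now legacy) Couchbase Java client 1.x, which keeps the cluster map updated and bootstraps from the remaining nodes when one in the list goes down. The bucket name and password are placeholders:

import java.net.URI;
import java.util.Arrays;
import java.util.List;
import com.couchbase.client.CouchbaseClient;

public class CouchbaseConnect {
    public static void main(String[] args) throws Exception {
        List<URI> baseList = Arrays.asList(
                new URI("http://10.0.0.40:8091/pools"),
                new URI("http://10.0.0.15:8091/pools"),
                new URI("http://10.0.0.16:8091/pools"));
        // If 10.0.0.40 is down, the client bootstraps from (and later
        // fails over to) the other nodes in the list without a restart
        CouchbaseClient client = new CouchbaseClient(baseList, "test", "test");
        client.set("key", 0, "value").get();
        client.shutdown();
    }
}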