I have a small Hadoop/Hive cluster (6 nodes in total).
Using "hadoop dfsadmin -report" I can see that all datanodes are working well and connected.
Additionally, when I add data to a Hive table, I can see that the data is distributed
across all the nodes (easy to check, as the disk space used increases).
I am trying to create some indexes on one table. From the JobTracker HTTP interface, I see only one node available. I tried to run multiple queries (I use MySQL for the metastore), but they appear to run only on the node where Hive is installed.
Basically, my question is how to make the JobTracker utilize the other nodes as well.
From what you describe, it looks like:
Datanodes are properly running on all nodes and are able to communicate with the namenode.
TaskTrackers are not running on any node except one, or are not able to communicate with the JobTracker for some reason.
After verifying that the TaskTrackers are indeed running, read their logs to find out why they fail to communicate with the JobTracker.
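You can also ask the JobTracker directly how many TaskTrackers it can see. Here is a minimal sketch using the Hadoop 1.x "mapred" API, assuming your cluster's configuration files are on the classpath:

import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class TrackerCheck {
    public static void main(String[] args) throws Exception {
        // Connect to the JobTracker named in the configuration on the classpath.
        JobClient client = new JobClient(new JobConf());
        // true = detailed status, including per-tracker names.
        ClusterStatus status = client.getClusterStatus(true);
        System.out.println("Task trackers: " + status.getTaskTrackers());
        System.out.println("Active:        " + status.getActiveTrackerNames());
        System.out.println("Blacklisted:   " + status.getBlacklistedTrackerNames());
    }
}

If this prints only one active tracker, the JobTracker really does not know about the others, and the TaskTracker logs on the missing nodes are the place to look.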
I've set up a Couchbase cluster with 2 nodes containing 300k docs across 4 buckets. The replicas option is set to 1, as there are only 2 machines.
But the documents are split, half on one node and half on the other. I need a full copy of every document on each node, so that if one node goes down the other can still supply all the data to my app.
Is there a setting I missed when creating the cluster?
Can I still set the cluster to replicate all documents?
I hope someone can help.
Thanks.
PS: I'm using Couchbase Community 4.5
UPDATE:
I've added screenshots of the cluster web interface and cbstats output.
The following is the state with only one node up:
Next, the state with both nodes up:
Then the cbstats results on both nodes when both are up and running:
As you can see, with only one node up, half the items are displayed. Does that mean the other half resides as replicas but is not shown?
Can my app still run consistently with only one node?
UPDATE:
I had to trigger fail-over manually to see the replicas become active on the remaining node. With just two nodes in the cluster, auto fail-over is disabled!
Couchbase Server will partition or shard the documents across the two nodes, as you observed. It will also place replicas on those nodes, based on your one-replica configuration.
To access a replica, you must use one of the Client SDKs.
For example, this Java code will attempt to retrieve a replica (getFromReplica("id", ReplicaMode.ALL)) if the active document retrieval fails (get("id")).
// Try the active copy first; if that read fails, fall back to reading
// from the replicas (and the active copy) via ReplicaMode.ALL.
bucket.async()
    .get("id")
    .onErrorResumeNext(bucket.async().getFromReplica("id", ReplicaMode.ALL))
    .subscribe();
ReplicaMode.ALL tells Couchbase to try all nodes holding replicas, as well as the active node.
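If you are not using the reactive API, the same fallback can be written with blocking calls. This is only a sketch, assuming the Java SDK 2.x blocking API, the usual com.couchbase.client.java imports, and an already-opened Bucket:

// Blocking sketch: try the active copy, fall back to a replica on error.
JsonDocument doc;
try {
    doc = bucket.get("id");
} catch (RuntimeException e) {
    // The active node did not answer; take the first replica copy that did.
    List<JsonDocument> replicas = bucket.getFromReplica("id", ReplicaMode.ALL);
    doc = replicas.isEmpty() ? null : replicas.get(0);
}

Keep in mind that a replica read can return slightly stale data if the latest mutation had not yet been replicated when the active node went down.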
So what was happening with only two nodes in the cluster was that auto fail-over did not kick in automatically, as specified here:
https://developer.couchbase.com/documentation/server/current/clustersetup/automatic-failover.html
This means the data replicas were not activated on the remaining node unless fail-over was triggered manually.
The best thing is to have more than two nodes in the cluster before going into production.
To be honest, I should have read the documentation very carefully before asking any questions.
Thanks Jeff Kurtz for your help; you pushed me towards the solution (understanding how the Couchbase replica policy works).
I'm considering using Amazon MySQL RDS with Read Replicas. The only thing bothering me is replica lag and eventual inconsistency. For example, imagine the case where a user modifies his profile (the UPDATE is performed on the main DB instance) and then refreshes the page to see the changed info (the SELECT might be performed against a replica that has not yet received the change due to replica lag).
By accident, I found an Amazon article which mentions that it's possible to perform targeted queries. To me it sounds like we can add some parameter to tell Amazon to execute a SELECT on the main DB instance instead of on a replica. The user-profile example is quite trivial, but the same problem occurs in more realistic cases, for example checkout, where a user performs several steps and needs to see updated info on the next screens. Yes, the application could cache the entire data set on its own, but it would be great if anybody knows how to perform targeted queries against the main DB instance.
I read the link you referenced and didn't find any mention of "target" or anything like that.
But this line might be what you're referring to:
Otherwise, you should spread out the load and read from one of the
Read Replicas. You can make this decision on a query-by-query basis
within your application. You will probably want to maintain some sort
of registry of available Read Replicas within your application,
choosing from among them on a round-robin or randomly distributed
basis.
If so, then I interpret that line to suggest that you can balance reads in your application by picking one server from a pool and hitting that one. But it would all be in your application logic.
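As a rough illustration of what that application logic could look like, here is a sketch in Java. The endpoint names are hypothetical placeholders, and the needsFreshData flag is your own per-query decision:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.List;
import java.util.Random;

public class ReadRouter {
    // Hypothetical endpoints; substitute your real RDS endpoints.
    private static final String MASTER_URL =
        "jdbc:mysql://master.example.rds.amazonaws.com/mydb";
    private static final List<String> REPLICA_URLS = List.of(
        "jdbc:mysql://replica-1.example.rds.amazonaws.com/mydb",
        "jdbc:mysql://replica-2.example.rds.amazonaws.com/mydb");
    private final Random random = new Random();

    // Reads that must see the latest writes go to the master;
    // everything else is spread over the replicas at random.
    public Connection connectionFor(boolean needsFreshData) throws Exception {
        String url = needsFreshData
            ? MASTER_URL
            : REPLICA_URLS.get(random.nextInt(REPLICA_URLS.size()));
        return DriverManager.getConnection(url, "user", "password");
    }
}

So after the profile UPDATE you would call connectionFor(true) for the next SELECT, and let everything else go to the replicas.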
I'm new to clustering and I'm doing a project on clustered databases. I want to make use of MySQL Cluster. I'm using it for a small-scale database, and this is my plan:
5 nodes:
1 management node,
2 SQL nodes,
2 API nodes.
My questions are:
1) Is my plan for the node process alright?
2) What should I do when I get the error "Failed to allocate node id..."?
3) Is it a requirement to use multi-threaded data node?
4) Where do I place my web server page for the user to access the database?
Please reply. Thank you so much.
This answer might be a little late but can be helpful for someone else getting started:
1) Is my plan for the node process alright?
Your plan for the node processes is OK for a small cluster. I would recommend adding an additional management node, and 2 more data nodes if the number of replicas you have configured is 2. The reason: since you currently have only 2 data nodes, your cluster will not be functional once one of those nodes dies. This is because of the two-phase commit that takes place: in the case of a failure, only 1 data node would be able to persist the data; the other would be unreachable, and the transaction would therefore be marked as incomplete.
2) What should I do when I get the error "Failed to allocate node id..."?
This error is usually thrown when you have assigned the same id to more than one node in your configuration file. Each node must have a unique NodeId.
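For example, a config.ini along these lines gives every process its own NodeId (the IP addresses here are hypothetical placeholders):

# Sketch of a config.ini fragment; every section gets a unique NodeId.
[ndb_mgmd]
NodeId=1
HostName=192.168.0.1

[ndbd]
NodeId=2
HostName=192.168.0.2

[ndbd]
NodeId=3
HostName=192.168.0.3

[mysqld]
NodeId=4

[mysqld]
NodeId=5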
3) Is it a requirement to use multi-threaded data node?
It is not a requirement, but it is recommended. Using the multi-threaded data node (ndbmtd) lets you take advantage of modern machines with multiple CPUs, so your queries and updates are processed much faster.
4) Where do I place my web server page for the user to access the database?
Hmm, I'm not sure what you want to achieve here; this would really be a separate question. Whether you use PHP or any other language, you will need a web server configured. Place your pages in the web server's document root to get started.
So what's the idea behind a cluster?
You have multiple machines with the same copy of the DB, and you spread the reads/writes among them? Is this correct?
How does this idea work? When I make a SELECT query, does the cluster analyze which server has fewer reads/writes and point my query to that server?
And when should you start using a cluster? I know this is a tricky question, but maybe someone can give me an example, like 1 million visits and a 100-million-row DB.
1) Correct, although each data node does not necessarily hold a full copy of the cluster data; every single bit of data is, however, stored on at least two nodes.
2) Essentially correct. MySQL Cluster supports distributed transactions.
3) When vertical scaling is not possible anymore, and replication becomes impractical :)
As promised, some recommended readings:
Setting Up Multi-Master Circular Replication with MySQL (simple tutorial)
Circular Replication in MySQL (higher-level warnings about conflicts)
MySQL Cluster Multi-Computer How-To (step-by-step tutorial, it assumes multiple physical machines, but you can run your test with all processes running on the same machine by following these instructions)
The MySQL Performance Blog is a reference in this field
1) Your first point is correct in a way, but I think that if multiple machines shared the same data it would be replication rather than clustering.
In clustering, the data is divided among the various machines by horizontal partitioning: the division is based on rows, and the records are distributed among the machines by some algorithm.
The data is divided in such a way that each record gets a unique key, just as in a key-value pair, and each machine has a unique machine_id that determines which key-value pairs go to which machine.
Each machine in the cluster runs an individual MySQL server, holds its own share of the data, and has a cluster manager. There is also data sharing between all the cluster nodes, so that all the data is available to every node at any time.
Retrieval of data can go through memcached servers for fast access,
and there can also be a replication server for a particular cluster to preserve the data.
2) Yes, that is possible, because all the data is shared among all the cluster nodes. You can also use a load balancer to balance the load; the idea of a load balancer is quite common, as most server setups use one. But if you are trying this just for your own knowledge, there is no need, because you will not see the kind of load that creates the requirement for a load balancer; the cluster manager itself can do the whole thing.
3) RandomSeed is right: you feel the need for a cluster when your replication becomes impractical. That is, if you are using the master server for writes and a slave for reads, at some point the traffic becomes so heavy that the servers can no longer work smoothly, and you will feel the need for clustering, simply to speed up the whole process.
This is not the only case; it is just one scenario.
Hope this is helpful for you!
I'm building a very small NDB cluster with only 3 machines. This means that machine 1 will serve as the MGM server, a MySQL server, and an NDB data node all at once. The database is only 7 GB, so I plan to replicate each node at least once. Now, since a query might end up using data that is cached in the NDB node on machine one, even if that node isn't the primary source for that data, access would be much faster (for obvious reasons).
Does the NDB cluster work like that? Every example I see has at least 5 machines. The manual doesn't seem to mention how to handle node differences like this one.
There are a couple of questions here:
Availability / NoOfReplicas
MySQL Cluster can give high availability when data is replicated across 2 or more data node processes. This requires that the NoOfReplicas configuration parameter is set to 2 or greater. With NoOfReplicas=1, each row is stored in only one data node, and a data node failure would mean that some data is unavailable and therefore the database as a whole is unavailable.
Number of machines / hosts
For HA configurations with NoOfReplicas=2, there should be at least 3 separate hosts. 1 is needed for each of the data node processes, which has a copy of all of the data. A third is needed to act as an 'arbitrator' when communication between the 2 data node processes fails. This ensures that only one of the data nodes continues to accept write transactions, and avoids data divergence (split brain). With only two hosts, the cluster will only be resilient to the failure of one of the hosts, if the other host fails instead, the whole cluster will fail. The arbitration role is very lightweight, so this third machine can be used for almost any other task as well.
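As a sketch of such a 3-host layout (the hostnames are hypothetical placeholders), the config.ini could look like this, with the management node, which by default also acts as the arbitrator, on the third machine:

# Two data nodes on hosts 1 and 2; management node / arbitrator on host 3.
[ndbd default]
NoOfReplicas=2

[ndb_mgmd]
HostName=host3.example.com

[ndbd]
HostName=host1.example.com

[ndbd]
HostName=host2.example.com

[mysqld]
HostName=host1.example.com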
Data locality
In a 2 node configuration with NoOfReplicas=2, each data node process stores all of the data. However, this does not mean that only one data node process is used to read/write data. Both processes are involved with writes (as they must maintain copies), and generally, either process could be involved in a read.
Some work to improve read locality in a 2-node configuration is under consideration, but nothing is concrete.
This means that when MySQLD (or another NdbApi client) is colocated with one of the two data nodes, there will still be quite a lot of communication with the other data node.