I'm trying to find a way to monitor Couchbase cluster settings such as memory, email configuration, etc.
Ideally it would be a CLI/REST command that describes the entire cluster configuration or particular components of it.
Couchbase version: 4.5.1 Community Edition
I'd appreciate any advice.
In current CB versions, you can get the email alert settings using http://hostname:8091/settings/alerts and memory/node info using http://hostname:8091/pools/nodes.
For some reason I can't access the archived CB documentation to confirm this, so try it out and see whether these APIs are available in 4.5.x. The pools API should be available; I'm not sure about the alerts API.
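If it helps, here is a minimal Java sketch of polling both endpoints; the hostname and the Administrator/password credentials are placeholders, and both responses come back as JSON you can feed into whatever monitoring you use:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class ClusterSettingsCheck {

    // Placeholder host and admin credentials - replace with your own.
    private static final String HOST = "http://hostname:8091";
    private static final String AUTH = Base64.getEncoder()
            .encodeToString("Administrator:password".getBytes());

    public static void main(String[] args) throws Exception {
        // /settings/alerts returns the email alert configuration,
        // /pools/nodes returns per-node memory, quota and status information.
        for (String path : new String[] {"/settings/alerts", "/pools/nodes"}) {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(HOST + path).openConnection();
            conn.setRequestProperty("Authorization", "Basic " + AUTH);
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                System.out.println(path + ":");
                in.lines().forEach(System.out::println);
            }
        }
    }
}
```

The same two URLs work just as well with curl or any other HTTP client, since they are plain GET requests with basic auth.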
A quick bit of advice, if I may. I'm a startup founder developing a new mobile app that I intend to have query and update data in a cloud MySQL database, using a RESTful web service and JSON. I am pretty new to this, but OK on the theory.
I originally thought I could use Dropbox to host the database and somehow also install a Tomcat server to act as the HTTP server, but I can't find anything online that says this is achievable. I've now found a temporary site, heliohost.org, which offers free hosting, so I'm looking into that.
Does anyone have advice on a [low cost] longer-term production cloud service for a MySQL database? And am I right that a good approach is to create a RESTful web service in Eclipse and then somehow deploy it to a Tomcat server in the cloud, so that my app can then issue calls to it via the available CN1 methods?
There is quite a lot out there, and much of it is self-promotion, so I was after some independent advice, please.
Many thanks in advance.
You can't host and access an SQL server over the network directly from a device, as that kind of access is remarkably unreliable and insecure. You will need some form of hosting. I used Linode for our online course since they are very affordable (5 USD per month), but I've used AWS, Digital Ocean and others. They are all good.
You are correct that you will need to create a web service. I used Tomcat in the past, but for the latest course I chose Spring Boot, which is easier and more modern.
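To give you an idea of what that looks like, here is a minimal, self-contained Spring Boot sketch with one JSON endpoint; the resource name and returned values are made up purely for illustration, and in a real app the handler would query your hosted MySQL instance via Spring Data JPA or JdbcTemplate:

```java
import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Single-file Spring Boot app exposing one JSON endpoint.
@SpringBootApplication
@RestController
public class ApiApplication {

    public static void main(String[] args) {
        SpringApplication.run(ApiApplication.class, args);
    }

    // GET /customers/42 -> {"id":42,"name":"example"}
    @GetMapping("/customers/{id}")
    public Map<String, Object> customer(@PathVariable("id") long id) {
        return Map.of("id", id, "name", "example");
    }
}
```

Your CN1 app would then simply issue HTTP GET/POST requests against those endpoints and parse the JSON responses.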
Using a mobile backend to store and retrieve data is a vast topic; different tools and services can be leveraged depending on your application's use cases.
However, directly accessing a MySQL server from your mobile client isn't a recommended approach, in terms of both security and performance at scale.
A few options you can consider:
Developing the mobile backend with AWS Mobile Hub, where you can find different architectures and services. For example:
Using AWS DynamoDB as the mobile backend, tightly controlling access permissions with Amazon Cognito and DynamoDB fine-grained access control.
Using Cognito Sync as a storage medium to synchronize data from the mobile app to AWS, and then using triggers to share and push data, etc.
Developing a REST API for the mobile backend using AWS services such as API Gateway, Lambda and DynamoDB (or relational databases like MySQL or PostgreSQL on RDS); a rough sketch of this last option follows below.
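Purely as an illustration of that last option (not a prescribed design), a Java Lambda handler behind API Gateway that writes to DynamoDB could look roughly like this; the "Notes" table, its "noteId" key and the flat request payload are all assumptions made up for the sketch:

```java
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Invoked by API Gateway; a plain Map keeps the sketch short
// (a proxy integration would hand you a richer event object).
public class SaveNoteHandler implements RequestHandler<Map<String, String>, String> {

    private final AmazonDynamoDB dynamo = AmazonDynamoDBClientBuilder.defaultClient();

    @Override
    public String handleRequest(Map<String, String> input, Context context) {
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("noteId", new AttributeValue(input.get("noteId")));
        item.put("text", new AttributeValue(input.get("text")));

        // "Notes" is a placeholder table with "noteId" as its partition key.
        dynamo.putItem("Notes", item);
        return "saved";
    }
}
```

The mobile app never talks to the data store directly; it only calls the API Gateway endpoint, which keeps credentials and access control on the server side.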
We are experiencing an issue when trying to connect to the cluster after updating the version of the Java SDK.
The setup of the system is as follows:
We have a web application that uses the Java SDK, and a Couchbase cluster. In between we have a VIP (Virtual IP address). We realise that isn't ideal, but we're not able to change it immediately since the VIP was mandated by Tech Ops. The VIP is basically only there to reroute the initial request on application startup. That way we can make modifications on the cluster and ensure that when the application starts it can find the cluster regardless of the actual nodes in the cluster and their IPs.
Prior to the issue we used Java SDK version 1.4.4. Our application would start, and the Java SDK would initiate a request on port 8091 to the VIP. Please note that port 8091 is the only port open on the VIP. The VIP would reroute the request to one of the nodes currently in the cluster, and that node would respond to the Java SDK. At that point the Java SDK would discover all the nodes in the cluster and the application would run fine. During uptime, if we added or removed a node from the cluster, the Java SDK would update automatically and everything would run without issue.
In the last sprint we updated the Java SDK to version 2.1.3. Now our application starts, the Java SDK initiates a request on port 11210 to the VIP, and since this port is not open the request fails and the Java SDK throws an exception:
Caused by: java.lang.RuntimeException: java.util.concurrent.TimeoutException
at com.couchbase.client.java.util.Blocking.blockForSingle(Blocking.java:93)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:108)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:99)
at com.couchbase.client.java.CouchbaseCluster.openBucket(CouchbaseCluster.java:89)
No further requests are made on any port.
It appears the order in which ports are used has changed between versions. Could somebody please confirm, or dispute, that the order in which ports are used for cluster discovery has changed between versions? Also, could somebody please provide some advice on how we could resolve the issue? We are trying to understand the client's behaviour: if we opened all of those ports on the VIP, would the client then function correctly and at full performance?
The issue is happening in our production environment, which we cannot use for testing potential solutions since that would interfere with our products.
In v2.x of the Java SDK, it defaults to port 11210 to get the cluster map to bootstrap the application. This is actually a huge improvement, as the map now comes from the managed cache and not the cluster manager (8091). The SDK should use 8091 as a fallback if it cannot get the map on 11210, though. Regardless, you really want to get that map from 11210, trust me. It cleans up a lot of problems.
To resolve this long term and follow Couchbase best practices, upgrade to the Java 2.2.x SDK, get rid of the VIP entirely, and go with a DNS SRV record instead. That gives you one DNS entry for the SDK connection object, and you just manage the node list in DNS. It works great. I say SDK 2.2 because the DNS SRV record solution is fully supported there; in 2.1 it is experimental. VIPs are specifically recommended against by Couchbase these days. In older versions of the SDKs it was fine to do this, and it helped with limiting the number of connections from the app to the DB nodes, but that is no longer necessary and can actually be a bad thing.
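For reference, switching on DNS SRV in the 2.2 SDK is roughly the following; the record name is a placeholder, and the SRV entries listing your actual nodes live in your DNS zone rather than in code:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class DnsSrvBootstrap {
    public static void main(String[] args) {
        // Tell the SDK to resolve the node list from a DNS SRV record.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .dnsSrvEnabled(true)
                .build();

        // "couchbase.example.com" is a placeholder; the SDK looks up
        // _couchbase._tcp.couchbase.example.com and bootstraps from the
        // hosts that the SRV record returns.
        Cluster cluster = CouchbaseCluster.create(env, "couchbase.example.com");
        Bucket bucket = cluster.openBucket("default");

        // ... use the bucket ...

        cluster.disconnect();
    }
}
```

Adding or removing nodes then becomes a DNS change rather than an application change.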
In addition to Kirk's long-term answer (which I also advise you to follow), a shorter-term solution may be to deactivate the 11210 bootstrapping (carrier bootstrap) through the CouchbaseEnvironment by calling bootstrapCarrierEnabled(false) on the builder.
I don't guarantee that it'll work with a VIP even after that, but it may be worth a try if you're in a hurry.
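Something along these lines (the VIP hostname is a placeholder); with carrier bootstrap disabled, the initial configuration is fetched over HTTP on 8091, which is the only port your VIP exposes:

```java
import com.couchbase.client.java.Bucket;
import com.couchbase.client.java.Cluster;
import com.couchbase.client.java.CouchbaseCluster;
import com.couchbase.client.java.env.CouchbaseEnvironment;
import com.couchbase.client.java.env.DefaultCouchbaseEnvironment;

public class HttpOnlyBootstrap {
    public static void main(String[] args) {
        // Disable carrier (port 11210) bootstrap so the initial cluster map
        // is fetched over HTTP (port 8091) instead.
        CouchbaseEnvironment env = DefaultCouchbaseEnvironment.builder()
                .bootstrapCarrierEnabled(false)
                .build();

        Cluster cluster = CouchbaseCluster.create(env, "vip.example.com");
        Bucket bucket = cluster.openBucket("default");
        // Note: after bootstrap, key-value traffic still goes directly to the
        // individual nodes on 11210, so that port must be reachable from the
        // application servers themselves (just not necessarily via the VIP).
    }
}
```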
From what I gather, the only way to use a MySQL database with Azure Websites is to use ClearDB, but can I install MySQL on VMs provided in Azure Cloud Services? And if so, how?
This question might get closed and moved to ServerFault (where it really belongs). That said: ClearDB provides MySQL-as-a-Service in Azure. It has nothing to do with what you can install in your own Virtual Machines. You can absolutely do a VM-based MySQL install (or any other database engine that you can install on Linux or Windows). In fact, the Azure portal even has a tutorial for a MySQL installation on OpenSUSE.
If you're referring to installing in web/worker roles: This simply isn't a good fit for database engines, due to:
the need to completely script/automate the install with zero interaction (which might take a long time). This includes all necessary software being downloaded/installed to the VM images every time a new instance is spun up.
the likely inability of a database cluster to cope with arbitrary scale-out (the typical use case for web/worker roles). Database clusters may or may not work well when a scale-out occurs (adding an additional VM). Same thing when scaling in (removing a VM).
less-optimal attached-storage configuration
inability to use Linux VMs
So, assuming you're still OK with Virtual Machines (vs. stateless Cloud Service VMs): you'll need to carefully plan your deployment, with decisions such as:
Distro (Ubuntu, CentOS, etc). Azure-supported Linux distro list here
Selecting proper VM size (the DS series provide SSD attached disk support; the G series scale to 448GB RAM)
Azure Storage attached disks being non-Premium or Premium (premium disks are SSD-backed, durable disks scaling to 1TB/5000 IOPS per disk, up to 32 disks per VM depending on VM size)
Virtual network configuration (for multi-node cluster)
Accessibility of database cluster (whether your app is in the vnet or accesses it through a public endpoint; and if the latter, setting up ACL's)
Backup / HA / DR planning
Someone else mentioned using a pre-built VM image from VM Depot. Just realize that, if you go that route, you're relying on someone else to configure the database engine install for you. This may or may not be optimal for what you're trying to achieve. And the images may or may not be up-to-date with the latest versions, patches, etc.
Of course, what I wrote applies to any database engine you install in your own virtual machines, where a service provider (such as ClearDB) tends to take care of most of these things for you.
If you are talking about standard VMs, then you can use a pre-built image from VM Depot for that.
If you are talking about web or worker roles (PaaS), I wouldn't recommend it, but if you really want to, you could. You would need to fully script the install of the solution on the host. The one downside (and it's a big one) is that the instance will be moved to a new host at some point, which would mean your MySQL data files would be lost; if you backed up frequently and were happy to lose some data, then this option may work for you.
I think the main question is "what do you want to achieve?". As I see it, you want to use a PaaS solution with Web Apps or Cloud Services, and you need a MySQL database. If so, you have two options (both technically possible, as David Makogon said). The first is to deploy your own (single) server with MySQL and connect to it from the outside (internet side). The second is to create a MySQL server or cluster and connect your application internally within an Azure virtual network. With Cloud Services this is simple, but with Web Apps it is not: you must create a VPN gateway on an Azure VM and connect your Web App to that gateway. That way you will have an internal connection from your application to your own MySQL cluster.
I have a publicly accessible database on RDS that works like a charm from NetBeans. I would like to deploy my Java application on AWS. What is the simplest way to do this? I will only use the application for some very basic tasks, getting used to cloud computing on a small scale. Is EC2 my best bet, and is it possible to upload apps as easily as with the Google App Engine plugin? Can I use the same JDBC driver as I use locally, and can I use JPA against the database? I would rather not use Eclipse for now, as I am in a bit of a hurry and need to get this working as soon as possible.
This is a lot of questions for one question, but I'll see if I can help you out.
1. Simplest Way to deploy to AWS
If this application is as simple as you say it is, the most cost effective solution while you're getting used to AWS will be to deploy to a micro instance and take advantage of the free tier. From Amazon:
AWS Free Tier includes 750 hours of Linux and Windows Micro Instances each month for one year. To stay within the Free Tier, use only EC2 Micro instances.
The simplest way to deploy directly from Netbeans is to use the integrated Elastic Beanstalk support. This saves you from having to configure things yourself.
Another option is to launch an Ubuntu AMI and install Tomcat. Create a WAR file from your application and place it where Tomcat can find it. I suggest using the first method.
2. Is EC2 my best bet?
This is a little open-ended. For a nice learning experience as you get accustomed to AWS, the free tier for EC2 is a nice platform to learn with. If your application eventually needs to scale, Elastic Beanstalk is a pretty simple way to manage an application. My answer is an opinion because "best bet" depends solely on the requirements of your application, but I say yes.
3. Is it possible to upload apps as easily as with the Google App Engine plugin?
For simple applications I think so. I think it's even easier if you switch to Eclipse and use the toolkit for AWS. Whether Google App Engine or AWS is easier for you will once again depend on personal preference, the application, and your requirements.
4. Can I use the same JDBC driver as I use locally?
If you're using MySQL Connector/J then yes. Read this to understand how it works with RDS.
5. Can I use JPA against the database?
Yes. You'll change the endpoint from localhost to the endpoint of your RDS instance.
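For example, the only thing that changes compared to local development is the JDBC URL; the RDS endpoint, schema, persistence-unit name and credentials below are placeholders for whatever your own setup uses:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class RdsConnectionExamples {

    // Placeholder RDS endpoint; copy the real one from the RDS console.
    private static final String URL =
            "jdbc:mysql://mydb.abc123xyz.us-east-1.rds.amazonaws.com:3306/appdb";

    public static void main(String[] args) throws Exception {
        // Plain JDBC with MySQL Connector/J, exactly as against localhost.
        try (Connection conn = DriverManager.getConnection(URL, "appuser", "secret")) {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        }

        // JPA: override the connection URL of a persistence unit named "app"
        // (assumed to be defined in persistence.xml) without touching the rest.
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.url", URL);
        props.put("javax.persistence.jdbc.user", "appuser");
        props.put("javax.persistence.jdbc.password", "secret");
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("app", props);
        emf.close();
    }
}
```

Just make sure the RDS security group allows inbound MySQL (port 3306) traffic from wherever the application runs.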
6. I would rather not use Eclipse for now...
Another personal preference, but the AWS toolkit for Eclipse is very easy to use and can speed the process up a bit.
I have different sites with 4 to 5 servers at each location. Each location has one monitoring server with Nagios. Now I want to create a central location and combine all the Nagios services running at each location. Can anyone please point me to some documentation for this type of job?
There are two approaches that you can take.
Install a new Nagios core, as you did at each location, and perform active checks on each of the remote hosts. You'll likely end up installing NRPE on each of the remote hosts at each location; you can read this document for the details: http://nagios.sourceforge.net/docs/nrpe/NRPE.pdf. If your remote servers are Windows servers, you can use NSClient to do much of the same things that NRPE does for Linux hosts. This effectively centralizes your monitoring server. I also wrote some how-to style entries for using NRPE to run privileged commands (http://blog.gnucom.cc/?p=479) or to run event handlers (http://blog.gnucom.cc/?p=458). If you get tired of installing NRPE, you can use my script here: http://blog.gnucom.cc/?p=185. I also have instructions to install NSClient here: http://blog.gnucom.cc/?p=201.
Install a new Nagios core, as you did at each location, and perform passive checks by instructing the remote Nagios cores to feed their results to the new central Nagios core's passive command file. I haven't done this myself, so I'm going to point you to the community's documentation here: http://nagios.sourceforge.net/docs/2_0/passivechecks.html. You could probably look at my event handler post to set up event handlers that send checks to the main server.
From my personal experience, the first option I mentioned is easier to implement and far easier to administer. However, as your server fleet grows you'll start seeing major CPU bottlenecks on the main Nagios core. This is where passive checks become beneficial, as the main Nagios core simply waits for check results to be sent to it rather than having to run the checks itself.
Hope this helps. :)
A centralized view tool may be what you are looking for. There are a number of different options available.
Nagios Fusion
MK Livestatus
Nagcen
Thruk