Template MySQL zabbix - mysql

I am trying to monitor a database using this template: https://share.zabbix.com/databases/mysql/template-mysql-800-items. The instructions do not indicate where each file should be placed or which configuration to modify. Does anyone have experience with this?

It's a bit of a broad question, but in general: the XML file is imported in the Zabbix frontend, while the Perl script goes in /opt/mysql_check.pl (or wherever you point to in the UserParameter entry in the agent daemon's configuration file).
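As a rough sketch of the agent-side wiring (the script path, the key name `mysql.check[*]`, and the config path are assumptions here -- the item keys defined in the template's XML dictate the exact UserParameter key you need):

```shell
# Place the template's Perl script somewhere the agent can execute it
cp mysql_check.pl /opt/mysql_check.pl
chmod +x /opt/mysql_check.pl

# Add a matching UserParameter to the agent config
# (the key 'mysql.check[*]' is a placeholder -- it must match the
# item keys in the imported XML template)
echo 'UserParameter=mysql.check[*],/opt/mysql_check.pl $1' >> /etc/zabbix/zabbix_agentd.conf

# Restart the agent so the new parameter is picked up
systemctl restart zabbix-agent
```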

Related

How to migrate data between Couchbase servers?

I'm new to Couchbase. Does anyone know how to connect between Couchbase (CB) servers in order to migrate data? I want to migrate data from a production CB server to my local server.
I found here how to migrate between buckets, but within the same server.
Also, I could do the migration between different servers from a backend application (I'm using C#.NET) using N1QL, but I want to learn how to do the server-to-server migration, since it is a pretty standard and common operation.
Many thanks in advance.
Do you need continuous migration, or would backup/restore be enough?
If the latter, you can use the cbbackup/cbrestore tools: http://developer.couchbase.com/documentation/server/current/backup-restore/backup-restore.html
Either cbbackup/cbrestore, or simply turning on XDCR to stream the changes from production to your local server, would work too.
There are definitely some security implications in doing this, but that's up to you to figure out! :)
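For the backup/restore route, a hedged sketch (hostnames, credentials, bucket names, and the backup directory are placeholders; check your server version's docs for the exact flags):

```shell
# Back up the buckets from the production cluster into a local directory
cbbackup http://10.10.10.68:8091 /tmp/cb-backup -u Administrator -p password

# Restore one bucket into the local cluster, optionally renaming it
cbrestore /tmp/cb-backup http://localhost:8091 \
  -b SourceBucketName -B TargetBucketName \
  -u Administrator -p password
```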
Thanks to @Ade Stringer, who finally gave me the best solution, which is to use the cbtransfer tool. This tool simply needs the source and target URLs of the servers (and the names of the buckets), which is ideal since, in general, one doesn't have access to the file system of the servers.
Here is a sample call:
cbtransfer http://10.10.10.68:8091 http://localhost:8091 -b SourceBucketName --bucket-destination TargetBucketName
Note that the first parameter is the source CB server (http://10.10.10.68:8091) and the second one is the target CB server (http://localhost:8091). The value of the -b parameter is the name of the source bucket and the value of the --bucket-destination parameter is the name of the target bucket.
In order to run this command in Windows, you must first go to the following folder:
C:\Program Files\Couchbase\Server\bin\
If one prefers the cbbackupmgr and cbbackup tools mentioned in the other answers, one needs access to the file system, which in my case was not possible. Both tools are useful nevertheless, and I appreciate those answers.

Relationship between different hbase-site.xml for a Cloudera install

With a Cloudera install of HBase, I saw three places have config information :
/etc/hbase/conf/hbase-site.xml,
/usr/lib/hbase/conf/hbase-site.xml,
and /var/run/cloudera-scm-agent/process/*-hbase-MASTER
Which one exactly is in effect? Or maybe all of them do?
In all cases HBase reads the /etc/hbase/conf/hbase-site.xml file. /usr/lib/hbase/conf/hbase-site.xml is a symlink to /etc/hbase/conf/hbase-site.xml, so it is the same file.
Lastly, anything in /var/run/ is a runtime variable and in your case it is the Cloudera Manager Agent. The Manager Agents are responsible for the management console and logging amongst other tasks.
I hope that helps,
Pat
The config file used is /usr/lib/hbase/conf/hbase-site.xml.
Other files aren't symbolic links.
Since the same configuration information needs to be used by other processes like the HMaster and RegionServers, the /usr/lib/hbase/conf/hbase-site.xml file is synced to different locations while initializing these daemons. Hence it is advised to make any configuration changes in /usr/lib/hbase/conf/hbase-site.xml only.
You also need to make the same changes to this file on all nodes in your cluster and restart the HBase daemons.
I hope this answers your question.
Per my search and learning, HBase actually has two types of hbase-site.xml files, one for HMaster/RegionServer, and the other for client.
In Cloudera's distribution, the hbase-site.xml file in folder /var/run/cloudera-scm-agent/process/*-hbase-MASTER is the config used by the running HMaster process. Similar is for RegionServer.
The hbase-site.xml file under /usr/lib/hbase/conf/ and /etc/hbase/conf/, symlinked from one to the other (according to @apesa), is for client usage. If one starts the HBase shell on an HMaster or RegionServer host, this client config file is used so the shell knows how to connect to the ZooKeeper quorum to reach the running HBase service. To use the HBase service from a separate client host, copy this client xml file to that host.
For regular Apache installation of HBase, as was indicated in Sachin's answer, the same hbase-site.xml is used for both purposes, though the HMaster, the RegionServer, and the client processes will use only the options needed and ignore the rest.
From experimenting with the hbase binary version 1.2.0-cdh5.16.1, it appears to use the Java classpath to find the hbase-site.xml file to use, whether running as a server or a client shell. There is a configuration parameter (--config) you can pass to hbase to control the config directory used, which by default is ./conf (run hbase to view the documented help on this).
This observation is supported by other answers on this topic (e.g. Question 14327367).
Therefore, to answer your specific question, to determine which config file is used on your machine, run hbase classpath and find which of the 3 directories appears first in the classpath.
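For example, the real command is `hbase classpath`; the snippet below simulates its output with a hard-coded string (the paths are illustrative) just to show how to pick out the winning conf directory:

```shell
# In practice: hbase classpath | tr ':' '\n' | grep conf | head -n 1
# Simulated here with an example classpath string:
classpath="/etc/hbase/conf:/usr/lib/hbase/lib/hbase-common.jar:/usr/lib/hbase/conf"

# The first conf directory on the classpath is where hbase-site.xml is loaded from
first_conf=$(echo "$classpath" | tr ':' '\n' | grep 'conf' | head -n 1)
echo "$first_conf"   # -> /etc/hbase/conf
```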

MySQL : Create custom data directory without Perl

MySQL provides a script for initializing a new data directory for database storage.
Unfortunately, in the noinstall zip for Windows, it is only available as a Perl script.
I'd like to initialize a new data directory without using Perl, since I'm building an auto-installer to be launched from several machines and I want to minimize the software that has to be installed.
Is there a workaround for this script that does not require Perl to be installed?
As stated in the MySQL docs, you can just copy the data dir from the MySQL zip to wherever you want.
Then you just need to either launch mysqld with the --datadir=new_path option, or specify datadir=new_path in the my.ini/my.cnf you'll be using.
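A sketch of both options (all paths are examples for a Windows noinstall layout):

```shell
# Copy the pristine data directory shipped in the zip to the new location
xcopy /E /I C:\mysql\data D:\mysql-data

# Option 1: point mysqld at it on the command line
mysqld --datadir=D:/mysql-data

# Option 2: set it in the my.ini you launch mysqld with instead:
#   [mysqld]
#   datadir=D:/mysql-data
```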

Magento Module install SQL not running

I have written a module that is refusing point blank to create the tables within my mysql4-install-1.0.0.php file....but only on the live server.
The funny thing is that on my local machine (a mirror of the live server, i.e. an identical file structure) the install runs correctly and the table is created.
So based on the fact that the files are the same can I assume that it is a server configuration and or permissions problem? I have looked everywhere and I can find no problems in any of the log files (PHP, MySQL, Apache, Magento).
I can create tables ok in test scripts (using core_read/write).
Anyone see this before?
Thanks
** EDIT ** One main difference between the 2 environments is that on the live server the MySQL is remote (not localhost). The dev server is localhost. Could that cause issues?
Is the module which your install script is a part of installed on the live server? (XML file in app/etc/modules/, Module List Module for debugging.)
Is there already a record in the core_resource table for your module? If so, remove it to set your script to re-run.
Is your file named correctly? The _modifyResourceDb method in app/code/core/Mage/Core/Model/Resource/Setup.php is where this file is included/run from. Read more here
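To make Magento re-run the installer, a hedged sketch (the database, account, and resource code 'yourmodule_setup' are placeholders -- use the setup resource name declared in your module's config.xml):

```shell
# Find the version record Magento keeps for your module
mysql -u magento -p magentodb \
  -e "SELECT code, version FROM core_resource WHERE code LIKE '%yourmodule%';"

# Remove it so mysql4-install-1.0.0.php runs again on the next request
mysql -u magento -p magentodb \
  -e "DELETE FROM core_resource WHERE code = 'yourmodule_setup';"
```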
Probably a permissions issue - a MySQL account used by public-facing code should have as few permissions as possible that still let it get the job done, which generally does NOT allow for creating/altering/dropping tables.
Take whatever username you're connecting to mysql with, and do:
SELECT User, Host
FROM mysql.user
WHERE User='your username here';
This will show you the user@host combos available for that particular username; then you can get the actual permissions with
SHOW GRANTS FOR 'your username here'@'host';
Do this for the two accounts on the live and development servers, which will show you what permissions are missing from the live system.
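If CREATE does turn out to be missing on the live account, a hedged example of adding it (database and account names are placeholders; grant only what the installer needs, and consider revoking again afterwards):

```shell
# Run as an admin account; 'magentodb' and 'magento'@'%' are placeholders
mysql -u root -p \
  -e "GRANT CREATE, ALTER, INDEX, DROP ON magentodb.* TO 'magento'@'%'; FLUSH PRIVILEGES;"
```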
In the Admin -> System -> Advanced section, is your module present and enabled?
Did you actually unpack your module to the right space, e.g. app/code/local/yourcompany/yourmodule ?
Do you have app/etc/modules/yourmodule.xml - I believe that this could be the overlooked file giving rise to your problem.
The cache could be the culprit: if you manually deleted the core_resource row for your module in order to make the setup SQL run again, you also have to flush the cache.
A likely difference between dev and production servers is cache settings, which would explain why you only see this in production.
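Flushing the file-based cache from the shell is usually enough (this assumes a default Magento 1 layout with var/cache under the web root):

```shell
# From the Magento root: remove cached config/layout so the deleted
# core_resource row is noticed and the setup script re-runs
rm -rf var/cache/*
```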
For me, the issue appeared because I was developing on Windows; Linux is case-sensitive. In my config.xml the setup section was named camelCase while the folder was named all lowercase. Making them match made the script run.

mysql - how do I load data from a different configuration

I'm not sure if this will make sense, but I'll give it a shot.
My hard drive went down and I had to reinstall the OS along with all my webserver configuration, etc. I kept a backup of the MySQL database, but it doesn't contain all the tables...I added a couple of tables after my last backup.
I have access to the hard drive and the directory where the mysql data files are stored from the failed hard drive, but I don't know how to retrieve the data into my new mysql database. Is it even possible to get the raw data files from mysql and load them into a different instance? I'd even be happy if there was some way for phpmyadmin to show the data files, then I could dump out to a backup txt file, and reload them into my new configuration.
Any help will be appreciated. thanks.
well, bad news...I can't access the drive anymore. As I tried to copy the files the drive went totally down. So, I'll just redo the couple tables. Thanks for your help anyway.
Although not recommended or reliable, it is possible to simply copy the data without using mysqldump. It might help if MySQL was shut down in a controlled manner (which does not appear to be the case) and the source and target environments are as similar as possible in terms of lib and MySQL versions.
The datafiles should be compatible - you can copy the data directory to ubuntu, and edit the /etc/mysql/my.cnf to point to the new directory.
The only catch might be where Ubuntu's case sensitivity affects the table names.
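A sketch of the raw copy on the Ubuntu side (paths are examples; this assumes the complete datadir is copied, including ibdata1 and the ib_logfile* files InnoDB needs, and that the server is stopped first):

```shell
# Stop the new server before touching its datadir
sudo service mysql stop

# Copy the old datadir from the mounted failed drive, preserving attributes
sudo cp -a /mnt/olddrive/var/lib/mysql/. /var/lib/mysql/
sudo chown -R mysql:mysql /var/lib/mysql

# Corresponding /etc/mysql/my.cnf entry if you keep the files elsewhere:
#   [mysqld]
#   datadir=/var/lib/mysql

sudo service mysql start
```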