MySQL Dashboard overwriting client (UUID issues)

I am using the MySQL dashboard and I have 2 servers which are master/slave to each other.
When I create the MySQL agent with a unique UUID, everything works fine: the agent connects to the dashboard and displays correctly, except that the hostname from agent.ini isn't honoured. However, when I start the agent on the second server, it overwrites the first one. The name is the same, and the server UUID is the same (which is odd, given that I manually configured the UUID), but the server details still pull it from the database rather than from the INI file. The host, however, is not the same.
In other words, instead of ending up with two instances, the second one takes over the previous one.
I disabled UUID discovery and set the UUID explicitly, but I can't seem to find a solution to this.
I hope someone can point me in the right direction before I lose all my hair.

Well darn it! Sometimes thinking for two seconds solves it. Since these hosts replicate each other and I have to replicate the mysql database (I don't like doing that; I'd much rather replicate only the databases that need it and leave the mysql database alone), the inventory table was shared between the hosts. That was the issue.
To solve it I recreated the inventory quite simply: run 'TRUNCATE mysql.inventory' and restart the agent on the host.
All is good!
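For anyone hitting the same thing, the whole fix was just the following, run on the host whose agent was being overwritten (this assumes, as in my case, that the monitoring agent stores its host identity in the mysql.inventory table and that table got replicated over):

-- Clear the replicated host identity so the agent generates a fresh one:
TRUNCATE TABLE mysql.inventory;
-- Then restart the monitoring agent on that host so it repopulates
-- mysql.inventory with its own UUID and hostname.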

ClearDB - does it take time to read the newest data?

I just used MySQL Workbench to connect to my ClearDB account, which is connected to an Azure web app. The problem is that even though I ran a query that drops/creates tables in the newly made schema, mirroring exactly the tables and data from my previous live server, when I go to mysite.azurewebsites.com/wp-admin I get an error establishing a database connection: the site could not be found, and it asks me to check whether my database contains the following pages: wp_blogs, ..........
What could be the problem? Does this process just need a bit of time to propagate all the data?
EDIT: something to note, which might be a factor: the last query I ran also included dropping/re-adding the table "wp_users", so all previous user data was wiped and replaced with the info from the previous live server.
Normally you will see any changes immediately. But because your database is hosted on a geo-separated cluster using circular replication, there are some rare circumstances where this might not be true.
Specifically, if your delete/write went to one master and your read query went to another. Data propagation is normally immediate but if one of the nodes is offline or the system is unusually busy there can be a delay.

Auto-Deletion of Table Rows

I'm new to MySQL, but I need it to work, as it will be at the center of my new SANS (Server Address Name System) system. The reason for this system is to provide a replacement for gameservers, since the default GameSpy service that some games use is being switched off at the end of next month.
The function of MySQL in SANS is to store the IPs and ports of active gameservers (which are patched to send info to MySQL), and then make the clients (again, patched to retrieve the information from MySQL) add the servers to their in-game server lists.
Of course, the issue here is that gameservers can easily go offline for any one of 1,000+ reasons, and we don't really want the clients' games showing gameservers that are offline, mainly because:
If we need to block any fake gameservers, these fake gameservers will still be in the server list (and also the MySQL database)
It will clog up the server list very quickly
Temporary servers such as home, development and test servers will still be in the list
If a server's IP and/or port changes for any reason (for example the server IP is dynamic), there will be duplicate servers in the list, and clients may not know which one to pick.
I've thought of a couple of solutions, including making the client ping each gameserver in turn to check to see if it is online, but this is not ideal for a couple of reasons:
The server computers' administrators may have WAN ping switched off, meaning that although our gameserver is online, it won't show in the list
Pings from clients may be seen as suspicious behaviour by the administrators of the networks the server computers sit on, meaning the client could end up being blocked because of it.
I've thought of a simple solution: get MySQL (or phpMyAdmin) to remove each table row 10 seconds after it has been added.
Is this sort of behaviour even possible?
I'm on Windows Server 2008 R2, with the latest MySQL Server and XAMPP.
I think you could use a MySQL trigger to accomplish this (I'm not sure about the 10 second delay), but I believe there's a better solution:
You could add a column called Status to whichever table stores the gameserver information.
Then you could use flags to differentiate types of gameservers: fake, test, active, inactive, etc.
Next you would filter what the user sees to only show active gameservers.
If a server doesn't report back within 10 seconds, its flag is simply set to inactive.
And finally you could schedule a job to run once a day to clean up records older than 24 hours.
If this doesn't work for your particular problem, let me know and I'll look into coding the trigger.
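If it helps, here is a rough sketch of that flag-and-cleanup approach in plain MySQL. The table and column names (gameservers, status, last_seen) are made up for illustration, it assumes each gameserver refreshes its own row whenever it reports in, and the event scheduler has to be switched on (SET GLOBAL event_scheduler = ON;):

-- Hypothetical table: each gameserver updates its row when it checks in.
CREATE TABLE gameservers (
    id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    ip        VARCHAR(45) NOT NULL,
    port      SMALLINT UNSIGNED NOT NULL,
    status    ENUM('active','inactive','test','fake') NOT NULL DEFAULT 'active',
    last_seen TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    UNIQUE KEY uq_host (ip, port)
);

-- Flag servers as inactive once they stop reporting in.
CREATE EVENT flag_stale_gameservers
    ON SCHEDULE EVERY 10 SECOND
    DO UPDATE gameservers
       SET status = 'inactive'
       WHERE status = 'active'
         AND last_seen < NOW() - INTERVAL 10 SECOND;

-- Daily cleanup of anything that has been silent for over 24 hours.
CREATE EVENT purge_old_gameservers
    ON SCHEDULE EVERY 1 DAY
    DO DELETE FROM gameservers
       WHERE last_seen < NOW() - INTERVAL 24 HOUR;

The clients' list query then simply filters on status = 'active'.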

MySQL single DB accessed by two servers

I am trying to build a website that uses a MySQL DB. What I want is for my database to be accessible from two servers, so that when server 1 is down, server 2 can access the same database and the website keeps working normally. I've read about multi-master replication, but it does not seem to be what I need. And what happens when using master-slave replication and the master server goes down? How can it be restored?
Thanks for your help.
I think the master-slave pattern is exactly what you're looking for. The master handles all the writes and the slaves handle all the reads. If you're cloud hosting with someone like Rackspace or AWS, they make it very easy to set up the data replication across each node. As for your last sub-question about what happens if the master goes down, I believe it is pretty straightforward to set up fallbacks for that too. There are likely several approaches, but at the most basic level I know you can set up multiple DB nodes (with a fallback algorithm) just like any other instance.
A final note... if it's your first time doing this, I highly recommend Rackspace, because their support is amazing and they make a huge effort when you're starting out to explain all your options and help you pick the best strategy.
PS: re-reading your question, it's a little unclear what you're trying to accomplish. You mention two servers accessing one DB, and you also talk about redundant setups with multiple DB instances. They're really two separate issues. The former is trivially easy, because you can always point more than one server at a DB; as long as the credentials are right, it will work. The tricky part is keeping the data synced properly. If both are reading and writing the same tables, things are going to bang together. That's where the master-slave pattern comes into play: all the writes go through the master, but anyone can read from any slave because the data gets replicated.
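For reference, the MySQL side of a basic master-slave pair looks roughly like this. The hostnames, credentials and log coordinates are made up; in practice you would read the log file and position from SHOW MASTER STATUS on the master:

-- On the master: create a user the slave will replicate as.
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the slave: point it at the master and start replicating.
CHANGE MASTER TO
    MASTER_HOST     = '192.0.2.10',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'repl_password',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;

If the master dies, a basic failover amounts to running STOP SLAVE on the most up-to-date slave and pointing the application's writes at it instead.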

Connect to a MySQL master-master setup from code

For a new website we must connect to a MySQL master-master setup. This is a .NET website using NHibernate, but the same would apply to Java or any other language. We chose this setup because we want the site to keep working if a database goes down. We don't like downtime.
Maybe I have a complete misunderstanding of how a master-master setup works (in MySQL), but the way I see it, you connect to your database as you normally would, and behind the scenes MySQL replicates the data between the two databases. If you do a write, it can go to either master 1 or master 2, and you normally wouldn't know which (except that the auto-increment id would return a different value). If master A somehow fails, master B will still work, thus no downtime; master A will be ignored until it comes up again, the data is replicated, and if all is well, master A will be back in the field again.
IF this is correct, and please correct me if my above rambling is wrong, do you need to do anything special in case one master goes down? If I connect to 192.168.1.50 (which is master A), what happens if master A goes down? Will MySQL somehow automagically connect me to 192.168.1.51 (master B) so my site will continue to work?
If I was NOT correct, how does MySQL master-master replication work then? Do I have to tell each query on which master it should be executed? That would make no sense, right, since if master A goes down, then all my queries on master A would still fail and the master-master setup doesn't help me at all.
So basically, I think my question is actually: do I still connect to a single MySQL host (I'm using NHibernate, but that doesn't really matter) and specify a single connection string, with MySQL knowing that there are two masters? Or does my code change in such a way that I need to specify connection strings for both masters (how?), do some special magic to balance the queries between the two servers, and so on?
Am I missing anything else? Thanks!
Maybe I have a complete misunderstanding of how a master-master setup works (in MySQL), but the way I see it, you connect to your database as you normally would, and behind the scenes MySQL replicates the data between the two databases. If you do a write, it can go to either master 1 or master 2, and you normally wouldn't know which (except that the auto-increment id would return a different value)
This is incorrect.
MySQL replication works by writing committed data (meaning either the changed rows or the actual SQL statements, depending on the replication mode) to a replication log, then shipping that log to the slaves, where they replay it and make the same changes.
In multi-master replication, each node is both a master and a slave, receiving updates from the previous machine in the loop, and transmitting them forward to the next machine. Each machine has a unique identifier that it uses when sending and receiving replication logs, allowing it to identify when data has come full circle.
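As a rough illustration, each node in such a loop carries configuration along these lines in its my.cnf; the values below are made up, and the only hard requirement is that server-id is unique on every node:

[mysqld]
server-id         = 1          # must be unique for every node in the loop
log-bin           = mysql-bin  # log committed changes to the binary log
log-slave-updates = 1          # re-log changes received from the previous
                               # node so they get forwarded to the next one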
This method is primitive but effective. It has also traditionally been a real pain in the rear end to manage and maintain. If at all possible, avoid multi-master in favor of other solutions. I use multi-master in production, and can say this from experience.
If I connect to 192.168.1.50 (which is master A), what happens if master A goes down? Will MySQL somehow automagically connect me to 192.168.1.51 (master B) so my site will continue to work?
When you connect to one machine in a multi-master loop, you are only connected to that one machine. If you need to be able to connect to multiple machines, should one be down, then you will need to handle that circumstance manually, either through modifications in your code or an intermediary load balancer.
Worse, when one machine in the loop does go down, the loop is broken. Let's say you have three, A, B, and C. The loop would be A => B => C => A. If B goes down, A can no longer transmit updates to C, meaning that C would be the only safe machine to connect to until B comes back up and the loop is restored.
In regards to auto-increment, take a look at auto_increment_increment and auto_increment_offset, two server variables that make auto-increment in multi-master replication setups possible. You should not, under any circumstances, use auto-increment in multi-master without having set up these two variables.
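The idea is to give every master the same increment but a different offset, so no two of them can ever hand out the same value. A sketch for a two-master loop (SET GLOBAL only affects new connections, so you'd normally also put these in my.cnf):

-- On master A: generates 1, 3, 5, ...
SET GLOBAL auto_increment_increment = 2;  -- step by the number of masters
SET GLOBAL auto_increment_offset    = 1;

-- On master B: generates 2, 4, 6, ...
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 2;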
Server=serverAddress1, serverAddress2, serverAddress3;Database=myDataBase;Uid=myUsername;Pwd=myPassword;
You can try this connection string, but I haven't tried it myself.

Migrating server, don't wanna lose MySQL data. Is Master-Master setup viable?

I am moving to a new server, and thinking about how to keep the data on my two MySQL servers consistent is causing me to lose both sleep and hair.
I was thinking about using a master-master setup to ensure that I lose nothing in the process. How viable is that? Any potential gotchas?
Why does the old server ever need to be aware of data written to the new server? For this reason, make it a master-slave setup.
You do have to deal with the same type of configuration though; for instance, make sure the old server only uses odd IDs and the new server only uses even IDs.
As soon as you shut down the old server (master), make sure nobody can write there anymore.
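Making the old master refuse writes is a single variable, assuming your application does not connect with SUPER privileges (which bypass it):

-- On the old master, at cutover time:
SET GLOBAL read_only = ON;   -- non-SUPER accounts can no longer write
-- MySQL 5.7.8+ also offers super_read_only to block SUPER accounts as well.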
I'm assuming your entire website uses one server for both the DB and the web hosting. If this is the case, I want to add the following:
Don't rely on DNS to migrate your site, as this can take a very long time for certain users.
Consider the following:
old.example.org is the site on the old machine
new.example.org is the site on the new machine.
www.example.org is a CNAME to old.example.org.
When you do the cutover, you will perform the following steps:
The old DB server is shut down, or set to read-only.
www.example.org becomes a CNAME to new.example.org
old.example.org should now host a website that automatically redirects people to new.example.org.
This means that your users might, for a while, browse the URL new.example.org directly. Once the DNS change has fully propagated, users will no longer be redirected and will automatically hit the new server when using www.example.org.
If you have a low-traffic site, this can be much easier: simply point your old application at the new MySQL database. Sure, it might seem a bit crazy to connect to a MySQL server over the net, but if you're not dealing with too much data this is so much easier than any other solution.