I created a Cloud SQL instance for development without a failover replica at the start.
Several days ago I tried to create a failover replica for it, but the operation sat waiting for completion a whole day without any further notification.
I tried again today, and I still cannot create it. The system always responds with:
Could not complete the operation.
I have changed the name, trying random instance IDs, but the result is still the same.
Has anyone had this kind of experience? How can I solve it?
Did you choose a zone?
When you create a failover replica, the "location" field is auto-filled on the edit page, but the "zone" field is empty; you need to select one from the drop-down list.
I just encountered the same problem as you, and this worked for me.
I just used MySQL Workbench to connect to my ClearDB account, which is connected to an Azure web app. The problem is that even though I ran a query that drops/creates tables in the newly made schema, mirroring exactly the tables and data on my previous live server, when I go to mysite.azurewebsites.com/wp-admin the error is in establishing the database connection: site could not be found, check if your database contains the following pages: wp_blogs, ..........
What could be the problem? Does this process just need a bit of time to propagate all the data?
EDIT: Something to note, which might be a factor: when I ran the last query, it also dropped/added the table "wp_users", so all previous data was wiped and replaced with the info from the previous live server.
Normally you will see any changes made immediately. But because your database is hosted on a geo-separated cluster using circular replication, there are some rare circumstances where this might not be true.
Specifically, if your delete/write went to one master and your read query went to another. Data propagation is normally immediate but if one of the nodes is offline or the system is unusually busy there can be a delay.
I have two applications running on openshift. Currently, it is for test purposes only, but the intention is to run those apps on openshift for real later on.
One thing that surprises me is that the data I enter gets deleted more or less regularly.
That is, when I return to the URL some days later, some tables are empty.
There are currently three developers, and none of us deleted the data on purpose...
Does it have to do with our price plan? Is there any other explanation?
Any hints will be appreciated.
Have you looked at the log files?
OpenShift does not do anything that would truncate or touch your database. The only other explanation would be that you are out of disk space, but then you would get an error message saying so.
If you can provide us with more detail then we might give you a better answer. What database gear are you using? Do you get any log messages? Does your data get successfully inserted into the table in the first place?
OpenShift does not just go into your database gear and delete data, but until you can tell us more, we can't give you better answers.
So I'm going to attempt to create a basic monitoring tool in VB.NET. I'd like some advice on how to tackle the logging and reporting side of things, so I'd appreciate responses from users who I'm sure have a better idea than me and can point out far more efficient ways of doing things.
My plan is to have a client tool that reads values from a MySQL database and refreshes every x interval; I'm thinking 10-15 minutes at the moment. This side of the application is quite easy: I can get something to read a database every x amount of time and then change labels and display alerts based on the results. This is all well documented, and I'm probably okay with it.
The second part is a client that sits in the system tray of the server gathering the required information. The system tray part will probably be the trickiest bit, but that's not really part of my question.
I assume I can use the normal information-gathering commands, store the results as strings, connect to the same database, and write them to the relevant fields. For example, if I had a MySQL table called "server" with a column titled "Connection", I could check whether the server has an internet connection, store the result as 1 for yes or 0 for no, and then send an UPDATE statement to set the "Connection" value to 0 or 1.
Then, I assume, the monitoring tool can run a query against the "Connection" column: if the value is 0, change a label or flag an error, and if it is 1, report that connectivity is okay?
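The write/read round trip described above could be sketched like this. This is a minimal sketch only: Python's sqlite3 stands in for MySQL so the example is self-contained, the "server" table and "Connection" column names are taken from the question, and has_internet() is a hypothetical placeholder for the real check.

```python
import sqlite3

# sqlite3 stands in for MySQL here so the sketch runs anywhere.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE server (name TEXT PRIMARY KEY, Connection INTEGER)")
conn.execute("INSERT INTO server VALUES ('web01', 0)")

def has_internet():
    return True  # placeholder for the real connectivity check

# Collector side: store the status as 1 (up) or 0 (down).
status = 1 if has_internet() else 0
conn.execute("UPDATE server SET Connection = ? WHERE name = ?", (status, "web01"))

# Monitor side: read the flag back and decide what to display.
(value,) = conn.execute(
    "SELECT Connection FROM server WHERE name = ?", ("web01",)
).fetchone()
print("connectivity OK" if value == 1 else "connectivity DOWN")
```

Against a real MySQL server the same two statements would be issued through whatever connector the VB.NET client uses; only the connection setup changes.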
My main questions about the above are listed below.
Is using a MySQL database the most efficient way of doing something like this?
Obviously, if my database goes down there's no more reporting; I still think that's a con I'll have to live with, though.
Is storing everything as values within the code the best way to store my data?
Is there any particular type or format I should use for the MySQL column? I was thinking maybe TINYINT(9)?
Is the above method redundant and pointless?
I assume all these database connections could cause some unwanted server load, however the 15 minute refresh time should combat that.
Is there a way to properly combat delays, for example the client not updating in time so the reporter picks up stale data? Perhaps a fail-safe column containing the last-updated time?
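The fail-safe suggested in that last question could look like this. An illustrative Python sketch only: the 15-minute interval and the last_updated field are assumptions taken from the question, and the row is a plain dict so the example stays self-contained.

```python
import time

# Treat any row older than the refresh interval as stale rather than
# trusting a possibly out-of-date Connection flag.
REFRESH_SECONDS = 15 * 60

# Example row, as it might come back from the database: 20 minutes old.
row = {"Connection": 1, "last_updated": time.time() - 20 * 60}

def is_stale(row, now=None):
    """Return True if the row was last updated longer ago than one interval."""
    now = time.time() if now is None else now
    return now - row["last_updated"] > REFRESH_SECONDS

print("stale data" if is_stale(row) else "fresh data")
```

The monitor would then show "no recent data" instead of a false "connectivity OK" when the collector has stopped reporting.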
You probably don't need the tool that gathers information per se. The web app (real time monitor) can do that, since the clients are storing their information in the same database. The web app can access the database every 15 minutes and display the data, without the intermediate step of saving it again. This will provide the web app with the latest information instead of a potential 29-minute delay.
In other words, the clients are saving the connection information once. Don't duplicate it in the database.
MySQL should work just about as well as anything.
It's a bad idea to hard code "everything". You can use application settings or a MySQL table if you need to store IPs, etc.
In an application like this, the conversion overhead will more than offset the space savings of a TINYINT. I would use the most convenient data type.
I need to add a column to my current table.
This table is used a lot during the day and night. I found out I need to alter it using the ALTER TABLE command documented here:
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
I tested it on a development server, and it took about 2 hours to complete. Now I want to execute this on the production server.
Will this stop my website?
Why not display a message on the site saying you will perform maintenance from midnight UTC on January 7, 2012?
This way you won't break any data and you won't get any MySQL errors: you execute your ALTER, and you start the site again once it's completed (don't forget the code to make sure you have the right field, etc.). Easy solution.
Stack Overflow does it; why not your site?
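The "make sure you have the right field" check before restarting the site could be sketched like this. A hedged sketch only: sqlite3 stands in for MySQL (with MySQL you would query information_schema.columns instead of PRAGMA table_info), and the table and column names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")

# The maintenance-window migration: add the new column.
conn.execute("ALTER TABLE orders ADD COLUMN status TEXT")

def column_exists(conn, table, column):
    """Check the live schema before letting the app start using the column."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    return column in cols

if column_exists(conn, "orders", "status"):
    print("schema is ready, safe to bring the site back up")
```

Running this kind of assertion at deploy time catches the case where the ALTER failed or was forgotten before traffic hits the new code.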
Yes, during an ALTER TABLE all reads and writes are blocked. If your website needs to use that table, requests will hang.
Try pt-online-schema-change. It allows reads and writes to continue, while it captures changes to be replayed against the altered table once the restructure is done.
Test carefully on your development server so you know how it works and what to expect.
It won't stop your website, but it will likely make it throw errors.
Of course there is no way to answer this without looking at all the code of your application.
The bottom line is, when in doubt schedule a maintenance window.
Point the production server at the dev DB (or a mirror of the production DB) for some time.
Alter the table in production.
Deploy the code which talks to the production DB (with the new attributes).
P.S.: I feel this is a safer and more foolproof way (based on my experience).
I am new to MySQL from an admin point of view.
I have spent the last few hours googling with no luck, and was wondering if anyone could point me in the right direction, either on what to google for or with a suggestion.
Basically I am looking for ideas on how best to monitor the data changes within a MySQL database, so that at the end of the day I can look at the activity and either choose to roll back a few transactions or restore the last daily backup.
I think programmatically there could be ways to do this with triggers, but I am not sure if that is a good route to head down; it just seemed possible to me.
As for rolling back to a previous state, I think I will be able to do a daily dump of the database that could be restored.
Cheers,
Rob
I would recommend triggers. I've used them to provide a replicated copy of a database and it works quite well. From within the trigger, insert a record into another table that indicates the operation performed and any data you need to associate with it.
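The trigger approach described above can be sketched as follows. This uses Python's sqlite3 so the example is self-contained and runnable; MySQL's trigger syntax is very similar (CREATE TRIGGER ... AFTER UPDATE ON ... FOR EACH ROW with OLD/NEW row references), and the table and column names here are purely illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER);

-- Audit table: one row per change, recording the operation and old/new values.
CREATE TABLE audit_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    op TEXT, account_id INTEGER, old_balance INTEGER, new_balance INTEGER,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- The trigger fires on every UPDATE and writes the change to the audit table.
CREATE TRIGGER accounts_audit AFTER UPDATE ON accounts
FOR EACH ROW BEGIN
    INSERT INTO audit_log (op, account_id, old_balance, new_balance)
    VALUES ('UPDATE', OLD.id, OLD.balance, NEW.balance);
END;
""")

conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.execute("UPDATE accounts SET balance = 250 WHERE id = 1")

row = conn.execute(
    "SELECT op, old_balance, new_balance FROM audit_log"
).fetchone()
print(row)
```

At the end of the day you can review audit_log to see what changed, and the recorded old values give you what you need to reverse individual transactions instead of restoring the whole daily dump.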