Upgrade MySQL 5.0.x to 5.x on AppServ Windows

I want to upgrade my AppServ MySQL installation from 5.0.x to 5.x.
I have some tables and views in it related to various web projects and VB.net applications.
Can anybody help me do that without data loss?

(Putting this in an answer as it's too long for a comment)
NB - I've not used AppServ so this answer is generic
The versions of software within AppServ appear to be old. Very old. MySQL 5.0.51b, PHP 5.2.6 and Apache 2.2.8 are way behind with regard to security and features. The best thing you can do is to replace the whole stack with a newer one.
If you do a quick Google search for "WAMP installer", a plethora of available stacks are listed. The first one in the list uses MySQL 5.6.17, PHP 5.5.12 and Apache 2.4.9. Again, not the newest, but much more recent and feature-rich. It's also available in 32-bit and 64-bit versions.
The first thing to do is to download a virtual machine system. (VirtualBox is a pretty simple one to get to grips with and runs on a variety of platforms). This is so that you can practise.
Spool up an instance of Windows (which is as close as possible to your live setup) and install your current version of AppServ and your applications which use it, take a snapshot (so you can roll back) and then work out slowly how to update to a new stack. Take lots of snapshots as you go.
You need to make a note of your MySQL data directories and back up your Apache, MySQL and PHP configurations.
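For reference, this is roughly what that backup step could look like scripted in Python. The paths are hypothetical AppServ-style locations (verify them against your own installation), and it assumes the stock mysqldump client is on your PATH:

    import os
    import shutil
    import subprocess
    from datetime import datetime

    # Hypothetical AppServ paths -- check them against your installation.
    DATA_DIR = r"C:\AppServ\MySQL\data"
    CONFIGS = [r"C:\AppServ\Apache2\conf\httpd.conf",
               r"C:\AppServ\MySQL\my.ini",
               r"C:\AppServ\php\php.ini"]
    DEST = os.path.join(r"C:\backups", datetime.now().strftime("%Y%m%d-%H%M%S"))
    os.makedirs(DEST)

    # Logical dump of every database; -p prompts for the root password and
    # --single-transaction keeps InnoDB tables consistent during the dump.
    subprocess.check_call(["mysqldump", "-u", "root", "-p",
                           "--all-databases", "--single-transaction",
                           "--result-file=" + os.path.join(DEST, "all.sql")])

    # Keep copies of the config files alongside the dump.
    for cfg in CONFIGS:
        shutil.copy2(cfg, DEST)

    # Copy the raw data directory too -- stop the MySQL service first so
    # the files on disk are in a consistent state.
    shutil.copytree(DATA_DIR, os.path.join(DEST, "data"))

The logical dump is the part you restore into the new stack; the raw data directory and config copies are your safety net if anything goes sideways.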
It will take time to iron out the bugs and problems you find along the way. Do not be downhearted.
Once you have worked out how to update your stack without data loss, try your applications on the virtual machine. There is no point in upgrading your stack if your software is going to bomb out the second it starts to run.
Once you're satisfied that you know all the steps you need, roll back to the snapshot you took at the start and go through all the steps again. Then again. And keep on restoring/upgrading until you are confident that you can do the update with the minimum of fuss and panic on the live system.
I would recommend doing your update over two sessions. For both sessions, choose a quiet time to do it. Essentially, out of office hours is best; early morning (after a good sleep) is even better.
During the first session (SESSION-1), take the server offline, back up everything, then return the server to live. And when I say "back up everything", I mean EVERYTHING! Take this backup and restore it to a virtual machine. Go through the steps you worked out before on this restored version to make sure everything is going to work. Make a note of anything that is different from the steps you worked out earlier.
When you've done your testing, you can do session two (SESSION-2). Again, take the server offline, run a differential backup of the system and a full backup of the MySQL databases. Update your WAMP stack (using the steps you worked out in SESSION-1) and bring it back online. Check that all your URLs and code still work.
After you've completed your checks, send triumphant emails to whoever needs to know, put a smug smile on your face for a job well done, pour yourself a large glass of whiskey (other drinks are available) and relax - you've earned it.
Sorry that I can't give you definitive steps, but I use Linux for all my PHP stacks, so these steps are what I would do if I were upgrading them. I spent 3 months practising upgrading my servers, then did all of them in a single night (I have separate MySQL servers, so it was only the Apache/PHP side I was updating - much easier and quicker).
Hopefully some of this helps. Good luck.


Best solution to sync 2 MySQL databases in a Laravel project

I have a Laravel project that is supposed to run on localhost.
But we need to add the ability to make some modifications while the user of the app is away from the PC that hosts it.
I know I can deploy the whole project on an online server, but so far that is not an option.
We only have a weak online server (it's a lot slower than localhost),
so we could use this weak online server just for those situations when the user wants to make some modifications remotely, which would happen from time to time, maybe two or three times a day, while localhost handles the heavy work for the rest of the day, which may be more than 300 or 400 processes a day.
I can't put the whole load on the online server while it's that slow, and we hardly need the online benefits - just those two or three remote modifications a day that the app user may or may not need - so I can't trade off localhost speed for online benefits I need only two or three times a day.
What solution could work here?
I know about master-slave and master-master replication, but they are not an option either.
Are there any ideas? Thank you in advance.
-------------- About the two environments (local and online) --------------
Local is Windows running a XAMPP stack (Apache, MySQL, PHP).
The server is Linux (I don't actually know which distro, but in any case I can't install any tools there ... just PHP packages with Composer).
I had the same problem uploading my Laravel project.
Just use FileZilla to upload your project - even with the worst internet speed you can do it - and save yourself the trouble.
To answer your question: if I were you, I would create a sync_data table in the application. The sync_data table will have the responsibility of recording the changes occurring to the various entities.
For example, if you change customer data on localhost, you save an entry to sync_data like type=customer, id=10, action=update, synced=No. Using a cron job you can then push these updates (fetching the customer record by the saved id) to your online server at regular intervals, which will not make your online server busy. Furthermore, your online users will at least have the latest data.
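A rough sketch of what that push could look like, using the mysql-connector-python package. The layout is hypothetical: here the sync_data row has its own primary key (id) and stores the changed entity's key in row_id, and the customers table and all credentials are illustrative, not from the question:

    import mysql.connector

    # Placeholder connection details for both ends.
    local = mysql.connector.connect(host="127.0.0.1", user="app",
                                    password="secret", database="appdb")
    remote = mysql.connector.connect(host="online.example.com", user="app",
                                     password="secret", database="appdb")

    cur = local.cursor(dictionary=True)
    cur.execute("SELECT id, type, row_id FROM sync_data "
                "WHERE synced = 'No' AND action = 'update'")
    for change in cur.fetchall():
        if change["type"] == "customer":
            # Fetch the current state of the changed row by its saved id.
            row_cur = local.cursor(dictionary=True)
            row_cur.execute("SELECT id, name, email FROM customers "
                            "WHERE id = %s", (change["row_id"],))
            row = row_cur.fetchone()
            # Upsert it on the online server (REPLACE is MySQL-specific).
            rcur = remote.cursor()
            rcur.execute("REPLACE INTO customers (id, name, email) "
                         "VALUES (%s, %s, %s)",
                         (row["id"], row["name"], row["email"]))
            remote.commit()
        # Mark the change as pushed. Inserts and deletes would need their
        # own branches in the same loop.
        done = local.cursor()
        done.execute("UPDATE sync_data SET synced = 'Yes' WHERE id = %s",
                     (change["id"],))
        local.commit()

Scheduled every few minutes (cron on the Linux server, Task Scheduler on the Windows/XAMPP box), this keeps the online copy at worst a few minutes behind without putting any steady load on the weak server.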

How to make a database migration where the target remains fully operational during it

At our company we are trying to migrate data from an old local SQL Server database to an RDS MySQL database using SSIS. The original database is roughly 4GB in size and we are required to do the migration without taking down the production servers. The dev team reports that the migration runs fine with data being transferred, but after several hours (roughly 8 hours, but it's not exact; sometimes it's less, sometimes it's more) the connection abruptly closes. We have tried everything we can possibly think of on our side but we don't know what else could be going wrong. Based on their tests and ours, we think the instance could be closing the connection after it has been open for too long. Does anyone know what could be causing this?
Is there an alternative tool for the migration that lets the target database remain fully operational during the process?
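One thing we still plan to check: MySQL's default wait_timeout is 28800 seconds - exactly 8 hours - which matches the symptom suspiciously well. This is roughly how the check looks with mysql-connector-python (host and credentials are placeholders):

    import mysql.connector

    conn = mysql.connector.connect(host="mydb.rds.example.com",
                                   user="admin", password="secret")
    cur = conn.cursor()
    # wait_timeout / interactive_timeout govern how long an idle connection
    # is kept open; the net_*_timeout pair governs stalls mid-transfer.
    cur.execute("SHOW GLOBAL VARIABLES WHERE Variable_name IN "
                "('wait_timeout', 'interactive_timeout', "
                "'net_read_timeout', 'net_write_timeout')")
    for name, value in cur.fetchall():
        print(name, value)

If wait_timeout comes back as 28800, raising it in the RDS parameter group (or keeping the SSIS connection active) should confirm or rule this out.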
I recommend you try MySQL Workbench 6.3 from Oracle, which has a piece precisely designed for your purpose. It is under the GPL, so there is a community version which is free. There is also Data Loader, which has a free trial version; the standard version is only $99. You can use a logical export and convert it, so there will be no downtime. GoldenGate would be perfect, but it is crazy expensive. I know people who have used Kettle to do what you are doing. Kettle is open source, but you will have to write transforms, so it will be a bit more tedious. With SQL Server you can clone the database, and then use the cloned version to do whatever you need to do to get it converted to MySQL - bring it down, whatever - while the original stays up.
Cheers
Why can't a 4GB database be brought down for a bit? And why would a 4GB database take 8 hours using SSIS? I commonly move terabytes around in less time than that. That is in an Oracle shop, but still...

Any reason NOT to use subdomain for development?

I was originally planning on using a local machine on our network as the development server.
Then I had the idea of using a subdomain.
So if the site was at www.example.com then the development could be done at dev.example.com.
If I did this, I would know that the entire software stack was configured exactly the same for development and production. Also, development could use the same database as production, removing the hassle of syncing the data. I could even use the same media (images, videos, etc.).
I have never heard of anyone else doing this, and with all these pros I am wondering why not?
What are the cons to this approach?
Update
OK, so it seems the major no-no of this approach is using the same DB for dev and production. If you take that out of the equation, is it still a terrible idea?
The obvious pro is what you mentioned: no need to duplicate files, databases, or even software stacks. The obvious con is slightly bigger: you're using the exact same files, databases, and software stacks. Needless to say, if your development isn't working correctly (infinite loops, and whatnot), production will be pulled down right alongside it. Obviously, there are ways to jail both environments within the OS, but in that case you're back to square one.
My suggestion: use a dedicated development machine, not the production server, for development. You want to split it for stability.
PS: Obviously, if the development environment missed a "WHERE id = ?", all information in the production database is removed. That sounds like a huge problem, doesn't it? :)
People do do this.
However, it is a bad idea to run development against a production database.
What happens if your dev code accidentally overwrites a field?
We use subdomains of the production domain for development as you suggest, but the thought of the dev code touching the prod database is a bit hair-raising.
In my experience, using the same database for production and development is nonsense. How would you change your data model without changing your code?
And two more things:
It's wise to prepare all changes in an SQL script that is run after testing, from a separate environment rather than your console. Some accidental updates to a live system gave me headaches for weeks.
It once happened to me that a restored backup didn't reproduce a live-system problem, because of an unordered query result. This strange behaviour of the backup later helped us find the real problem more easily than retrying on the live system would have.
Using the production machine for development takes away your capacity to experiment. Trying out new modules/configurations can be very risky in a live environment. If I mess up our dev machine with an error in the Apache conf, I will just slightly inconvenience my fellow devs. You will be shutting down the live server while people are trying to give you their money.
Not only that, but you will be sharing resources with the live environment. You can forget about stress testing when the dev server also has to deal with actual customers. Any mistakes that can cause problems on the development server (an infinite loop taking up the entire CPU, running out of HDD space, etc.) suddenly become a real issue.

When to switch from SQLite to MySQL in production?

I am developing a web application in Django. My application is already up, and some users are using it (say about 5-10). The database is SQLite. Should I move to MySQL now?
Or wait till the user base increases? I don't have any user registration feature yet. The basic usage of the app is: problems are served and users solve them.
Move now. It'll be a pain to move later. At least right now if you take your website offline for a few hours it won't be noticeable. Later, that will be a problem. (Not to mention, you'll probably have to write a script to move data from your SQLite database to MySQL, which is a pain in the ass in and of itself.)
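For the record, Django's own tooling can handle most of that move, so the hand-written script may not be needed. A sketch assuming a default project layout; the database name and credentials are placeholders:

    # 1. While still on SQLite, export everything through the ORM:
    #
    #        python manage.py dumpdata --natural-foreign \
    #            --exclude contenttypes --exclude auth.permission > data.json
    #
    # 2. Point DATABASES in settings.py at MySQL instead:
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.mysql",
            "NAME": "myapp",        # placeholder database name
            "USER": "myapp",        # placeholder credentials
            "PASSWORD": "secret",
            "HOST": "127.0.0.1",
            "PORT": "3306",
        }
    }
    # 3. Create the schema in MySQL, then reload the exported data:
    #
    #        python manage.py migrate
    #        python manage.py loaddata data.json

The contenttypes/auth.permission exclusions avoid duplicate-row errors when loading into the freshly migrated schema, since migrate recreates those rows itself.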
I don't get why you would use SQLite for development and then deploy with MySQL.
Why not develop and deploy on the same RDBMS?
Definitely move to MySQL now - on both development and production (and staging?). The earlier you do it, the fewer users you disrupt and the smaller and simpler the migration will be.
Do it on development first so you see what problems you're going to run into, and resolve them before migrating production. If you were to keep using SQLite for development and MySQL for production, you would run into problems with the differences eventually.

Which server should I choose for MySQL: Windows or Unix/Linux/Ubuntu/Debian?

I'm working on a SaaS project and MySQL is our main database. Our application is written in C# .NET and runs on a Windows 2003 server.
Considering maintenance, cost, options and performance, which server platform should I choose for MySQL hosting: Windows or Unix/Linux/Ubuntu/Debian?
The scenario is as following:
The server I run today has a moderate transaction volume. The databases grow by 5MB daily, we expect that to increase to 50MB within a couple of months, and the system is mission critical.
I don't know how big the database is going to be. We rent a VPS to host the application and database server.
Most of our queries are simple, but our ORM tool makes constant use of subqueries. We also run reports, both simple and heavy ones. Some of them run after a user click, but most run from a queue.
Buying extra co-lo space would be nice as we get more clients. It's a SaaS project, after all.
When developing, you can use your Windows box to also run a MySQL server. If and when you want to have your DBMS on a separate server, it can be either a Windows or Linux server.
MySQL and supporting tools for backup etc. probably have more choices on Linux.
There are also 3rd party suppliers who will host your MySQL database on their servers. The benefit is they will handle backups, maintenance etc.
Also: look into phpMyAdmin for use as a great admin tool.
Larry
I think you need more information to make an informed decision. It's hard to just pull out a "best" answer based on no specific information.
What is your expected transaction volume?
How big will the database get?
How complex are your queries, i.e. are they long-running or relatively quick?
Are you hosting the application on your own server at your own location? If you have to buy extra co-lo space maybe an extra server isn't the best option.
How "mission critical" is this database? Ie maybe you need replicated servers to ensure stability.
There is a server sizing tool online at http://www.sizinglounge.com/, so you should check that out. It sounds like your server could be smaller than their smallest tier, but it should be a good place to start.
If this is a mission critical application you need to do some kind of replication to an extra server in case the primary one fails, so you are definitely looking at two systems. This has to be in addition to a good backup plan.
Given that you are uncertain about how big it could get you might just continue renting a server. For your backup one idea would be to look at running MySQL on an Amazon EC2 instance. BTW it is important to have a remote replicated server. If you have two systems next to each other and an environmental problem comes up, they could both be out of commission at the same time. But with a remote copy your options are open to potentially working around it.
If you run a lot of read-only queries locally and have your site hosted somewhere, it might make sense to set up a local replicated database copy to query against. That could potentially improve both your website and local performance quite a bit. Plus it would give you some good peace of mind having a local copy under your control.
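As a sketch of what that split looks like from the application side - assuming a local replica on 127.0.0.1 and placeholder credentials, with mysql-connector-python:

    import mysql.connector

    # Placeholder connection details.
    primary = mysql.connector.connect(host="db.example.com", user="app",
                                      password="secret", database="saas")
    replica = mysql.connector.connect(host="127.0.0.1", user="report",
                                      password="secret", database="saas")

    def run_report(sql, params=()):
        # Read-only reports hit the local replicated copy.
        cur = replica.cursor()
        cur.execute(sql, params)
        return cur.fetchall()

    def save(sql, params=()):
        # All writes go to the primary; replication carries them to the copy.
        cur = primary.cursor()
        cur.execute(sql, params)
        primary.commit()

The one caveat is replication lag: reports from the local copy can run a little behind the primary, which is usually fine for reporting workloads.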
HTH,
Brandon