Azure App Service Backup Partial for WordPress? - mysql

App Service on the Standard plan, using the MySQL in-app database. The app is stopped, and a manual backup always completes as "partial". The backup blade's configuration shows that no database exists. I am concerned that the database files on the filesystem are not being included, so a restore will fail.
How can I be confident in Azure App Service Backup?
Thanks.
Added Information: Backup Log
CorrelationId: 19a70ee5-7158-49e9-8f58-35e39f231a34
Creating temp folder.
Retrieve site meta-data.
Backing up the databases.
Failed to backup in-app database. Skipping in-app database backup.
Backing up site content and uploading to the blob...
Uploading metadata to the blob.

Partially Succeeded usually means that some files could not be backed up because they were locked for some reason. When this happens the backup process skips them and backs up the rest of the site, plus the database if one is configured. You should be able to see which files were skipped in the log file. If you do not need these files backed up, you can exclude them by following the instructions in the section “Backup just part of your app” here.
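For reference, the partial-backup mechanism works by placing a _backup.filter file in D:\home\site\wwwroot, with one path per line (relative to D:\home) that should be excluded from the backup. A minimal sketch - the folder and file names below are only examples:

    \site\wwwroot\logs
    \site\wwwroot\temp\export.zip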
A locked on-demand/triggered Azure WebJob is sometimes the reason for a Partially Succeeded backup status; stopping the WebJob can resolve it.

Using the Azure website backup feature means you back up the following:
1. Your web app configuration
2. Your actual web app file contents
3. The Azure MySQL database - this is optional, i.e. you can choose whether to back it up or not.
You can also try scheduled backups. There are two options for this: schedule a full backup, or a partial backup using the Kudu console.
You may refer to this doc https://learn.microsoft.com/en-us/azure/app-service/manage-backup
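If you want an extra, independent copy of the file system on top of the portal backup, one option is to pull a ZIP of a folder through Kudu's ZIP API (the same site that hosts the Kudu console). A minimal sketch in Python, assuming Basic authentication with the app's deployment (publishing) credentials; the app name and credentials below are placeholders:

    import base64
    import shutil
    import urllib.request

    app = "myapp"                       # placeholder app name
    user = "$myapp"                     # placeholder app-level deployment user
    password = "<deployment-password>"  # placeholder

    # Kudu's ZIP API returns the requested folder (relative to D:\home) as a ZIP.
    url = f"https://{app}.scm.azurewebsites.net/api/zip/site/wwwroot/"

    req = urllib.request.Request(url)
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")

    with urllib.request.urlopen(req) as resp, open("wwwroot.zip", "wb") as out:
        shutil.copyfileobj(resp, out)

The in-app MySQL data files typically live under D:\home\data\mysql, so /api/zip/data/mysql/ can be fetched the same way if you also want a copy of those; stop the app first so the files are not in use.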

Related

What is the best way to organize SQLite sync log between local and file server's copy?

I have a document, which is a SQLite DB, stored locally. I need a way to organize some cloud service to sync all changes made to the document, incrementally. The idea is to store a log of transactions in JSON format and, at a certain moment (manually or on a schedule), upload this log file to the cloud server, where the changes will be applied to the server copy. The cloud server should be as simple as possible - not a true SQL server like MySQL etc. Are there any pitfalls with this approach?
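To make the approach concrete, here is a minimal sketch in Python of the change-log idea described above: the client appends each write statement to a JSON-lines log, and the server later replays the log against its copy of the SQLite file. All file, table, and function names are made up for illustration.

    import json
    import sqlite3

    LOG_FILE = "changes.json"  # illustrative name; one JSON object per line

    def record_change(sql, params):
        """Append one write operation to the local change log."""
        with open(LOG_FILE, "a", encoding="utf-8") as log:
            log.write(json.dumps({"sql": sql, "params": list(params)}) + "\n")

    def apply_log(db_path, log_path):
        """Replay an uploaded change log against the server's copy of the DB."""
        conn = sqlite3.connect(db_path)
        with conn, open(log_path, encoding="utf-8") as log:
            for line in log:
                entry = json.loads(line)
                conn.execute(entry["sql"], entry["params"])
        conn.close()

    # Client side: apply the write locally, then log it for later upload.
    local = sqlite3.connect("document.db")
    with local:
        local.execute("CREATE TABLE IF NOT EXISTS notes (body TEXT)")
        sql, params = "INSERT INTO notes (body) VALUES (?)", ("hello",)
        local.execute(sql, params)
    record_change(sql, params)
    local.close()

One pitfall this already hints at: schema changes and non-deterministic statements (anything using the current time, or autoincrement IDs referenced elsewhere) also have to be captured in the log, or the two copies will drift apart.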

How to avoid optaweb-employee-rostering rebuild persisted data on server restart

I'm running optaweb-employee-rostering in a dockerized WildFly server, persisting data with a MySQL database that also runs in a container. The .war file is not built into the server's Docker image; it's deployed manually via WildFly's admin interface. Every time the container is stopped and restarted, the application rebuilds the sample data, deleting any data saved during usage, so the final behavior is the same as RAM-based storage: the data is lost if the server stops.
Is there a way to avoid this behavior and keep saved data on server restart?
This is caused by the hbm2ddl value here and by the Generator's post-construct. In the current OpenShift image there are environment variables to change that.
We're working on streamlining this "getting started" and "putting it into production" experience, as part of the refactor to React / Spring Boot.
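For context, the Hibernate setting in question is hibernate.hbm2ddl.auto. A minimal sketch of the relevant persistence.xml property, assuming the build reads standard JPA/Hibernate configuration; the exact file, and the environment-variable names the OpenShift image exposes for it, may differ:

    <!-- "create" drops and recreates the schema on every start;
         "update", "validate" or "none" leave existing data in place. -->
    <property name="hibernate.hbm2ddl.auto" value="update"/>

Note that the sample-data Generator mentioned above runs independently of this setting, so both may need to be changed to keep saved data across restarts.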

Back up MySQL using phpMyAdmin

I use MySQL for my application, which is written in PHP. Over time its data will grow large, and I need to back it up. I also need to be able to restore the backup whenever necessary. My question is whether phpMyAdmin can back up and restore securely and completely, without any data loss.
(I have both MyISAM and InnoDB in my database structure.)
Also, if you know any other IDE or tool that can back up and restore without exposing the database structure and tables to the end user, please tell me its name.
Thank you.
If you're running MySQL on your own server you may copy the database folder, but the MySQL server would have to be stopped first. In any case, I'd recommend dumping the databases through phpMyAdmin (export function) or via the command line (mysqldump). Using the latter, you can write a batch script that also compresses and encrypts the content of the dump file.
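A minimal sketch of such a script in Python (the batch-file idea above works the same way with a plain mysqldump command line); the user name, password handling and file name are placeholders. Because the schema mixes MyISAM and InnoDB, --single-transaction would only give a consistent snapshot of the InnoDB tables, so the more intrusive --lock-all-tables is used here:

    import subprocess

    # Placeholders: adjust credentials and output path to your setup.
    dump_cmd = [
        "mysqldump",
        "--user=backup_user",
        "--password=backup_password",  # better: keep credentials in an option file
        "--all-databases",
        "--routines",
        "--events",
        # MyISAM tables are not transactional, so lock everything for consistency.
        "--lock-all-tables",
    ]
    with open("alldb.sql", "wb") as out:
        subprocess.run(dump_cmd, stdout=out, check=True)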
Using the built-in Import and Export? The only data loss would be whatever was written after the backup was taken, up until the time the backup was imported.
Securely? That's an entirely different topic. There's too many things to consider to call anything secure but if you're using https or on a trusted LAN, then yes, I guess it's secure.
I think MySQL Workbench can do exports and imports.
If you wish to back up a MySQL database more securely and periodically, without loss of data, to a remote server or local drive (local hard disk or mapped drive), you can try Vembu StoreGrid http://www.vembu.com/, a leading backup product trusted by more than 2700 service providers worldwide. Once you configure a single backup selecting the entire database, any newly added database will automatically be backed up at the next backup time. StoreGrid also backs up only the incremental bytes of changes on each incremental schedule.
Vembu has a solution for almost everyone, just check them below:
For Service Providers : StoreGrid Service Provider Edition
For Business/Offices : StoreGrid Professional Edition
For Home Users: Vembu Home
For Resellers who don't want to have their own storage : Vembu Pro
For Hosting Providers: StoreGrid Hosting Provider Edition (yet to be released)
For your requirement, we would suggest the Professional Edition of StoreGrid: http://storegrid.vembu.com/online-backup/network-backup.php. Try it!
Regards,
Thileepan
vembu.com

Schedule data transfer service in MySQL

I have two DB servers, i.e Server1 and Server2.
I want to transfer data from Server1 to Server2 every morning (let's say at 9:00 AM).
How can we achieve it?
Can this transfer of data be done automatically?
My choice on a Windows machine is to create a batch file that runs mysqldump with the parameters that suit you best.
This batch file can be tied to the Windows scheduler for automated execution at any point in time.
You can consult this page for some guidelines from the MySQL community.
Now that you have a dump of your DB, your script should send it to the destination server and deploy it there (this can also be done automatically).
I use the mysqldump parameters that allow me to incrementally add data to the new server.
I transfer the dump using DeltaCopy, which is a community Windows wrapper around the rsync program (if interested, you should check out Syncrify on that page as well).
Both of those last points allow for a much faster process than copying the entire DB every time.
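A minimal sketch of the dump-and-load step in Python, which could equally be a batch file run by the Windows scheduler as described above; host names, credentials and the database name are placeholders, and the dump is streamed straight into the destination server instead of being written to disk first:

    import subprocess

    SRC = ["--host=server1", "--user=transfer_user", "--password=CHANGE_ME"]
    DST = ["--host=server2", "--user=transfer_user", "--password=CHANGE_ME"]
    DB = "appdb"  # placeholder; the database must already exist on Server2

    # Dump from Server1 and pipe the SQL directly into Server2.
    dump = subprocess.Popen(
        ["mysqldump", *SRC, "--single-transaction", "--routines", DB],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["mysql", *DST, DB], stdin=dump.stdout, check=True)
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("mysqldump failed")

Scheduling it for 9:00 AM is then just a Windows Task Scheduler (schtasks) entry, or a cron job on Linux, pointing at this script.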

Coldfusion and mySQL - seeking recommendations for general and off site backup strategy

CF9, Windows Server 2008 Standard, IIS7, mySQL 5.1.48 community.
I have managed to get CF to take a mysqldump of the database, which I was going to run as a nightly cfschedule task, with a server-time-based lock on the application controlled in application.cfc.
That will get me a local copy, but what's the best strategy to encrypt the datadump.sql text file (and what would you use to do so for sensitive personal information) and transfer it to an off-site location - cfftp?
For my personal sites, I use a ColdFusion scheduled job that runs a mysqldump, and then stores the updated backup in a dropbox account. I've never bothered encrypting the backups, though that does seem like a potential hazard. What if the encrypted file becomes corrupted? Then you can't even get a partial restore from uncorrupted sections of the file.
Can't you use Windows' scheduled tasks for the backup and EFS to encrypt it?
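If you would rather encrypt the dump itself before it leaves the server (so the off-site copy stays protected wherever it lands), here is a minimal sketch using the third-party Python cryptography package; the file names are placeholders, and the key must be stored separately from the backups or they cannot be restored:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate once and keep somewhere safe, away from the backup files.
    key = Fernet.generate_key()
    with open("backup.key", "wb") as kf:
        kf.write(key)

    f = Fernet(key)
    with open("datadump.sql", "rb") as src:
        encrypted = f.encrypt(src.read())
    with open("datadump.sql.enc", "wb") as dst:
        dst.write(encrypted)

    # Restore later with: Fernet(key).decrypt(open("datadump.sql.enc", "rb").read())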