Database synchronization on Azure with different Affinity Groups - mysql

From time to time we need to copy/replicate the production database into the development database. The database is quite big, and the servers are in different affinity groups.
What's the best way to do this without incurring traffic costs (if at all possible)? Development and production should ideally be in different affinity groups, but that makes this scenario harder.
If they were in the same affinity group, could I connect as if they were on the same LAN?

As you're talking about an affinity group, I'm assuming you're using a database server on a virtual machine.
Traffic costs only apply when sending data out of the data centre, so irrespective of whether the servers are in the same affinity group or not, you will not incur data transfer costs as long as you deploy everything to the same Windows Azure data centre.
If you deploy all the virtual machines on the same virtual network, they will simply see each other on the network.
Another option, which does not require the servers to be on the same network (or even on the same account), when using SQL Server, is to export the database to blob storage and import it on the other server.
This would also be the mechanism I would use if you were using Windows Azure SQL Database.
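Since the question is tagged mysql, the same dump-to-blob-storage mechanism applies there too. Below is a minimal sketch of that approach in Python, assuming the azure-storage-blob SDK, mysqldump/mysql on the path, and placeholder connection strings and credentials; it illustrates the idea rather than being a production script (on SQL Server you would export a bacpac instead).

    # Sketch: dump the production MySQL database, stage the dump in Azure Blob
    # Storage, then restore it on the development server. All names and
    # credentials below are placeholders.
    import subprocess
    from azure.storage.blob import BlobServiceClient

    AZURE_CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
    CONTAINER = "db-backups"
    DUMP_FILE = "production.sql"

    # 1. On the production VM: dump the database.
    with open(DUMP_FILE, "wb") as f:
        subprocess.run(
            ["mysqldump", "-h", "127.0.0.1", "-u", "backup_user",
             "-pSECRET", "production_db"],
            stdout=f, check=True,
        )

    # 2. Upload the dump to blob storage; staying inside the same data centre
    #    means no egress charges.
    service = BlobServiceClient.from_connection_string(AZURE_CONN_STR)
    blob = service.get_blob_client(container=CONTAINER, blob=DUMP_FILE)
    with open(DUMP_FILE, "rb") as f:
        blob.upload_blob(f, overwrite=True)

    # 3. On the development VM: download the dump and restore it.
    with open(DUMP_FILE, "wb") as f:
        f.write(blob.download_blob().readall())
    with open(DUMP_FILE, "rb") as f:
        subprocess.run(
            ["mysql", "-h", "127.0.0.1", "-u", "dev_user", "-pSECRET",
             "development_db"],
            stdin=f, check=True,
        )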

Related

Data transfer between local server and AWS server

We have an ASP.NET MVC 5 web application that reads data locally from within the same server. This server is in Europe. However, when trying to read the same data from an AWS server based in Sydney, the lag is many times greater. A ping from our local server to the AWS server in Australia takes 5 seconds. The data needs to be located in Australia because of data protection laws issued by the Australian Government. The database is MySQL. We have created a VPN between both servers and it made no difference.
What are our options in order to improve the speed between these two servers?
If it is a web application serving content to users over the internet, you can use a CloudFront distribution to reduce your latency issues.
https://aws.amazon.com/cloudfront/
If you are trying to connect the servers in your own data center to AWS:
Use AWS Direct Connect. This will provide a dedicated link between your on-premises data center and the AWS servers, decreasing your latency by a lot.
https://aws.amazon.com/directconnect/
AWS runs your application regardless of which platform (ASP.NET, Java, C, ...) it is built on; AWS only provisions infrastructure. You don't need to worry about the platform your application runs on or which database it connects to. You just need to ensure that all the necessary network connections are open so that your servers can communicate with the AWS servers.

Can I install MySQL on the VMs provided in Azure Cloud Services?

From what I gather, the only way to use a MySQL database with Azure Websites is to use ClearDB, but can I install MySQL on the VMs provided in Azure Cloud Services? And if so, how?
This question might get closed and moved to ServerFault (where it really belongs). That said: ClearDB provides MySQL-as-a-Service in Azure. It has nothing to do with what you can install in your own Virtual Machines. You can absolutely do a VM-based MySQL install (or any other database engine that you can install on Linux or Windows). In fact, the Azure portal even has a tutorial for a MySQL installation on OpenSUSE.
If you're referring to installing in web/worker roles: This simply isn't a good fit for database engines, due to:
the need to completely script/automate the install with zero interaction (which might take a long time). This includes all necessary software being downloaded/installed to the VM images every time a new instance is spun up.
the likely inability of a database cluster to cope with arbitrary scale-out (the typical use case for web/worker roles). Database clusters may or may not work well when a scale-out occurs (adding an additional VM), and the same goes for scaling in (removing a VM).
less-optimal attached-storage configuration
inability to use Linux VMs
So, assuming you're still OK with Virtual Machines (vs. stateless Cloud Service VMs), you'll need to carefully plan your deployment, with decisions such as:
Distro (Ubuntu, CentOS, etc.); Azure publishes a list of supported Linux distros
Selecting the proper VM size (the DS series provides SSD attached-disk support; the G series scales to 448GB RAM)
Azure Storage attached disks being non-Premium or Premium (premium disks are SSD-backed, durable disks scaling to 1TB/5000 IOPS per disk, up to 32 disks per VM depending on VM size)
Virtual network configuration (for multi-node cluster)
Accessibility of database cluster (whether your app is in the vnet or accesses it through a public endpoint; and if the latter, setting up ACL's)
Backup / HA / DR planning
Someone else mentioned using a pre-built VM image from VM Depot. Just realize that, if you go that route, you're relying on someone else to configure the database engine install for you. This may or may not be optimal for what you're trying to achieve. And the images may or may not be up-to-date with the latest versions, patches, etc.
Of course, what I wrote applies to any database engine you install in your own virtual machines, whereas a service provider (such as ClearDB) tends to take care of most of these things for you.
If you are talking about standard VMs, then you can use a pre-built image from VM Depot for that.
If you are talking about web or worker roles (PaaS), I wouldn't recommend it, but if you really want to, you could. You would need to fully script the install of the solution on the host. The only downside (and it's a big one) is that the instance will be moved to a new host at some point, which means your MySQL data files would be lost. If you back up frequently and are happy to lose some data, then this option may work for you.
I think the main question is: what do you want to achieve? As I see it, you want to use a PaaS solution with Web Apps or Cloud Services, and you need a MySQL database. If so, you have two options (both technically possible, as David Makogon said). The first is to deploy your own server with MySQL and connect to it from the outside (internet side). The second is to create a MySQL server or cluster and connect your application internally over the Azure virtual network. With Cloud Services this is simple, but with Web Apps it is not: you must create a VPN gateway in Azure and connect your Web App to that gateway. This way you will have an internal connection from your application to your own MySQL cluster.

Best solution for automated collection of data from remote MySQL servers

I have done extensive research. I feel that I have good candidates, but I still lack enough knowledge to decide which one I should implement. Ideally, I would like to hear from someone who has actually implemented a solution to a similar problem.
The Problem
Our project consists of a community of distributed nodes (25 nodes). The nodes run on Linux computers and are installed in typical residential settings (behind a NAT), with wide dispersion both geographically and ISP-wise.
Our software on each node collects a variety of its own unique data, which is logged to a MySQL DB on the local host (the node) that is not directly WAN-accessible. We also have a web interface for each node that uses the local node DB to allow the local node user to visualize certain data and parameters; this is only accessible on the LAN.
We typically set up and maintain an open SSH port from our labs to each node. All the remote DBs on the nodes have exactly the same schema but completely different data. We need an automated way to collect all data from all the nodes and get it to our WAN-accessible lab servers (Windows 7 servers, but they can be Linux if that provides a better solution). We have narrowed the options down as follows:
Solutions:
Create a .bat script that sequentially connects to each node over SSH to import data (one way to automate this is sketched after this list).
Use the web interface that runs on each node to periodically query the local DB and then save that data to a central MySQL server. I know I can connect to two DBs in PHP, so this seems doable.
Use the MySQL-supported master-slave replication setup, which will duplicate all remote databases on the server.
Use the MySQL-supported FEDERATED engine setup, which will link local tables to remote ones.
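For illustration, here is a minimal sketch of option 1 in Python rather than a .bat file: it tunnels each node's local MySQL port through the existing SSH connection and copies rows into a central server. The sshtunnel and PyMySQL packages, the host names, credentials and the readings table are all assumptions made for the example, not part of our actual setup.

    # Sketch: pull rows from each node's local MySQL (not WAN-accessible) over
    # the node's SSH port and insert them into the central lab server.
    # Hosts, users, passwords and table layout are placeholders.
    import pymysql
    from sshtunnel import SSHTunnelForwarder

    NODES = [
        {"ssh_host": "node1.example.org", "ssh_port": 22},
        {"ssh_host": "node2.example.org", "ssh_port": 22},
    ]

    central = pymysql.connect(host="lab-central.example.org", user="collector",
                              password="SECRET", database="central_db")

    for node in NODES:
        # Forward the node's MySQL port (3306 on its localhost) to a local port.
        with SSHTunnelForwarder(
            (node["ssh_host"], node["ssh_port"]),
            ssh_username="labuser",
            ssh_pkey="/home/labuser/.ssh/id_rsa",
            remote_bind_address=("127.0.0.1", 3306),
        ) as tunnel:
            local = pymysql.connect(host="127.0.0.1", port=tunnel.local_bind_port,
                                    user="node_reader", password="SECRET",
                                    database="node_db")
            with local.cursor() as src, central.cursor() as dst:
                # Fetch rows not yet exported (assumes an 'exported' flag column).
                src.execute("SELECT id, ts, value FROM readings WHERE exported = 0")
                rows = src.fetchall()
                dst.executemany(
                    "INSERT INTO readings (node, id, ts, value) "
                    "VALUES (%s, %s, %s, %s)",
                    [(node["ssh_host"],) + row for row in rows],
                )
            central.commit()
            local.close()

    central.close()

A real script would also mark or timestamp the exported rows on the node so the same data is not copied twice. Options 3 (replication) and 4 (the FEDERATED engine) would instead be configured on the MySQL servers themselves rather than scripted like this.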
Questions:
Are all of these viable solutions?
Are there any major cons I should be aware of for the viable ones?
Are there better solutions available (paid or otherwise)?

Single Store CRM application to Multi Store CRM - Single Local database server to Multi Store Multi location

My company has a desktop application developed in VB.NET using DevExpress controls. The back-end database is MySQL.
The company is in retailing and has 2 retail stores in the same city. Both stores are always busy and customers are always waiting at the counter. Basically, it is a desktop-based CRM application with a lot of modules: apart from the invoice/receipt module, it has a delivery module, installation module, service/repair module, accounts receivable module and many other modules used by various back-office departments of the company. Other resources/hardware such as a barcode printer, receipt printer and barcode scanner are connected to the CRM on the desktop PC.
Currently, there are around 55 clients permanently connected to the server and using the application.
Problem:
Until a couple of weeks ago, the company had no issue using this desktop application with a single MySQL server, as all clients were connected via LAN or WLAN.
Now the situation has changed, and a new requirement has arisen: the company plans to open new stores very far away. Such stores cannot be connected to the current central database via LAN or WLAN. Each new branch would have around 20-30 clients, say "branch clients".
Also, there will be field executives working from their laptops, say "remote clients". They will just have a 3G internet connection on their laptops.
Thought 1: Install the desktop application on all branch PCs, and connect them to the central MySQL database server over the internet.
Not possible: connections over the internet would be very slow for fetching such huge amounts of data. The data really is huge: for example, if a client opens "Customer Master", more than 600,000 rows are loaded, which takes a lot of bandwidth and time over the internet. And there are many more such modules which load a lot of data.
Also, if the internet connection is lost, clients would not be able to operate the application. Customers waiting in line for a receipt would go crazy if they had to wait that long.
Thought 2: Install a new MySQL server at each branch store; all the desktop PCs would then be connected to that local branch server, and that local branch server would be connected to the central server via MySQL replication.
Not possible: since MySQL replication is limited to one-way replication, we cannot implement this structure. The application requires data to move from the central server to the branch server and from the branch to the central server in real time. Also, MySQL replication can only replicate with one other server, so we cannot replicate with multiple branch stores. There is a cluster server option, but the company cannot afford the licensing cost.
Thought 3: Somebody suggested that I convert the entire desktop application into a web application and get a cloud server for the database.
Not possible: I think that given the current requirements (fast access), the environment (retail store POS) and the hardware (printers, scanners) connected to the clients, it is not advisable to have a web application and a cloud database server. Also, in the event of no internet, the entire store would go down.
Thought 4: Somebody suggested that I move from MySQL to MSSQL and keep the desktop application as it is. MSSQL has the capability to sync with multiple servers in real time over the internet. It does not have limitations like MySQL's one-way replication and single replication connection.
I guess that to get a faster and constant database connection, a local branch server is definitely required. But I don't know how those different branch servers could be connected to the central server.
My Questions:
• What is the best way to resolve the above issues under the given conditions and successfully fulfill the company's requirements: a faster and constant connection to the database server, plus real-time updates between all branches and the central server? If the internet connection is down, a delay in the real-time updates is acceptable, but clients should not be prevented from working.
• Would migration from MySQL to MSSQL resolve the issue? Data migration itself is not a problem, as there are many tools available which convert a database from one platform to another. But the issue is that the application is very large, with hundreds of queries written for MySQL. I guess I would have to change all those queries as well, because queries are not the same for MySQL and MSSQL. Do I have to change all the queries or just a small percentage of them? Or is there a tool available which converts queries from MySQL to MSSQL?
• In general, how do such small-to-medium retail store companies set up their infrastructure and applications? Let me know some ideas.

Run MySQL and PostgreSQL on same server

The application running for our customer uses a MySQL database. However, this server has no monitoring. I want to install OpenNMS (which uses PostgreSQL) to monitor the solution and send traps to the main NMS system.
Is there any problem having both on the same server?
No, there is no technical problem. The two default to listening on different ports.
The only problem that could arise is that each individual DB might be slower compared to an installation on separate physical machines, because they both share (and fight for) the same resources (I/O, memory, CPU, network, ...).
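As a quick illustration of the different defaults (MySQL on 3306, PostgreSQL on 5432), a small check like the one below can confirm that both services are listening side by side on the same host; the host name is a placeholder.

    # Sketch: verify that MySQL (default port 3306) and PostgreSQL (default
    # port 5432) are both accepting connections on the same machine.
    import socket

    HOST = "db-server.example.org"  # placeholder host name

    for name, port in (("MySQL", 3306), ("PostgreSQL", 5432)):
        try:
            with socket.create_connection((HOST, port), timeout=3):
                print(f"{name} is listening on port {port}")
        except OSError:
            print(f"{name} is not reachable on port {port}")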