WordPress setup latency on Azure - MySQL

I have a WordPress environment set up on Azure.
The front end is on a Web App (size S2: 2 cores, 3.5 GB RAM) while the DB is on 2 replicated classic virtual machines (size F2: 2 cores, 4 GB RAM).
We also tried connecting the web app to the VMs over a point-to-site VPN, which in a nutshell is a VPN from one Azure service (the Web App) to another (the VMs), so ultimately the connection is still being made over the internet.
I'm looking for ways to improve network latency between Azure's Web App and the virtual machines.

Firstly, if you're trying to "improve" the network latency, then you have an issue somewhere else. Please provide more details on your latency problem.
You should be moving towards the ARM (Azure Resource Manager) deployment model now. If you want to improve performance, you can also try Azure Service Fabric.
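Before changing the architecture, it helps to quantify the latency. A minimal sketch for timing TCP handshakes from the Web App to the VM's MySQL port (host and port below are placeholders):

```python
import socket
import statistics
import time

def tcp_connect_latency_ms(host, port, samples=5):
    """Estimate network latency by timing repeated TCP handshakes (in ms)."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection completes the full TCP three-way handshake
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(timings)

# e.g. tcp_connect_latency_ms("10.0.0.4", 3306) run from the Web App's console
```

Comparing this number from the Web App against the same probe run from a VM in the same virtual network tells you how much of the latency the point-to-site VPN path is adding.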

Related

Set up MySQL separately from servers

I have a DirectAdmin server that hosts 8 sites.
These sites have high traffic and a lot of database connections, and when optimization add-ons are enabled, our database's resource usage reaches 12 GB and causes disruption and slowness.
I plan to install a local MySQL server and separate the database from the DirectAdmin server.
Does this make sense? If yes, is there a training link for this?

Data transfer between local server and AWS server

We have an ASP.NET MVC 5 web application that reads data locally from within the same server. This server is in Europe. However, when trying to read the same data from an AWS server based in Sydney, the lag is many times greater. A ping from our local server to the AWS server in Australia takes 5 seconds. The data needs to be located in Australia because of data protection laws issued by the Australian Government. The database is MySQL. We have created a VPN between both servers, and it made no difference.
What are our options in order to improve the speed between these two servers?
If it is a web application serving content to users over the internet, you can use a CloudFront distribution to reduce your latency issues.
https://aws.amazon.com/cloudfront/
If you are trying to connect the servers in your data center to AWS, use AWS Direct Connect. This will provide a dedicated link between your on-premises datacenter and the AWS servers, decreasing your latency by a lot.
https://aws.amazon.com/directconnect/
AWS runs your application regardless of which platform (ASP.NET, Java, C, ...) it is built on; AWS only provisions infrastructure. You don't need to worry about the platform your application runs on or what database it connects to. You just need to ensure that all the network connections are properly open so that your servers can communicate with the AWS servers.

Google Compute Instance 100% CPU Utilisation

I am running an n1-standard-1 (1 vCPU, 3.75 GB memory) Compute Engine instance. In my Android app, around 80 users are online right now, the instance's CPU utilisation is 99%, and my app has become less responsive. Kindly suggest a workaround, and if I need to upgrade, can I do that with the same instance or does a new instance need to be created?
Since your app is already running and users are connecting to it, you don't want to do the following process (it requires downtime):
shut down the VM instance, keeping the boot disk and other disks
boot a more powerful instance, using the boot disk from step (1)
attach and mount any additional disks, if applicable
Instead, you might want to do the following:
create an additional VM instance with similar software/configuration
create a load balancer and add both the original and new VM to it as a backend
change your DNS name to point to the load balancer IP instead of the original VM instance
Now your users will be sent to whichever VM is least loaded, and you can add more VMs if your traffic increases.
You did not describe your application in detail, so it's unclear whether each VM has local state (e.g., runs a database) or there's a database running externally. You will still need to figure out how to manage stateful systems, such as the database or user-uploaded data, across all the VM instances, which is hard to advise on given the little information in your question.
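The least-loaded routing that the load balancer performs in the steps above can be sketched as a least-connections picker; the class and backend names here are hypothetical:

```python
class LeastConnectionsBalancer:
    """Route each request to the backend with the fewest active connections."""

    def __init__(self, backends):
        self.active = {backend: 0 for backend in backends}

    def acquire(self):
        # Pick the backend currently serving the fewest requests
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        # Call when a request completes so the counts stay accurate
        self.active[backend] -= 1
```

In practice Google's load balancer makes this decision for you based on its backend health checks and balancing mode; the sketch only illustrates why traffic spreads out as you add VMs.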

AWS - needed requirements - Windows - SQL Server - web app

I'm a total newbie with cloud computing, but I want to try it.
I want to relocate a public app developed by myself and currently served by traditional hosting.
The requirements are just the basics:
- Windows Server 2008 R2 or later
- Minimal RAM
- .NET 2.0 and .NET 4.0 support
- IIS
- SQL Server 2008
- Some 20 GB for the app, DB, and files
- Some 10 GB more (DB backup)
With Windows + SQL Server I can't use the free tier. I know that, and I'm ready to pay for the platform needed; in my current hosting I'm paying for that too.
I just want to see if someone can validate my configuration and say if I'm forgetting something:
- 1 EC2 instance, Windows and Std. SQL Server, m4.large
- 1 Amazon EBS volume, 100 GB, 3 IOPS, 30% snapshots
- 1 Elastic IP for the single instance
- 5 GB transfer (in, out, inter-datacenter)
http://calculator.s3.amazonaws.com/index.html#r=IAD&s=EC2&key=calc-D29B73A3-A7C5-4CEE-9C4F-3ED5A74D2420
Basically, the server will run the web app, which connects to the database installed on the same machine (maybe in the future, with more visits, the DB will be moved to another VM). The web app will be publicly accessible.
Does anyone see something missing?
Should I buy Amazon S3 storage for the backups?
Where will the EBS volume backups be saved? Or with my configuration can't I have backups?
At the moment I'll keep using my own DNS server, and I hope to be able to configure it to point to AWS without needing Amazon Route 53.
As you can see, I need some orientation, because I'm a little lost in this new world. What services do I need? What about the backups?
I'm not thinking at this moment about optimization (caching, load balancing, ...). If things go the right way, maybe in the future. Right now I just want the simplest installation. My problem is Windows and SQL Server: those can't be used on the free tier, and I can't change the app.
I feel ashamed for being so annoying, but I still have these doubts.
There are a lot of pieces to this. I will try and break some of it down.
Should I buy Amazon S3 storage for the backups? Where will the EBS
volume backups be saved? Or with my configuration can't I have backups?
You can script a backup of the drive's contents to S3, or back up the volume entirely as an EBS snapshot. It's totally up to you. Backing up the contents of the drive to S3 with a proper script (like this) would be a great idea.
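A minimal sketch of such a backup script, assuming boto3 is installed and AWS credentials are configured; the bucket name and key prefix are placeholders:

```python
import os

def backup_manifest(root, prefix):
    """Map every file under `root` to the S3 key it should be uploaded as."""
    pairs = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            local = os.path.join(dirpath, name)
            key = prefix + "/" + os.path.relpath(local, root).replace(os.sep, "/")
            pairs.append((local, key))
    return pairs

# Upload step (sketch): for each (local, key) pair, call
#   boto3.client("s3").upload_file(local, "my-backup-bucket", key)
# "my-backup-bucket" is a placeholder; upload_file handles multipart
# uploads for large files automatically.
```

Running something like this on a schedule (Windows Task Scheduler on the instance) gives you file-level restores, while EBS snapshots cover whole-volume recovery.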
At the moment I'll keep using my own DNS server, and I hope to be able to configure it to point to AWS without needing Amazon Route 53.
Route 53 is amazing and an extremely small charge for the flexibility it gives. I would recommend using it because it ties in neatly to other AWS services you might need (like load balancers) in the future.
I just want to see if someone can validate my configuration and say if I'm forgetting something:
1 EC2 instance, Windows and Std. SQL Server, m4.large
1 Amazon EBS volume, 100 GB, 3 IOPS, 30% snapshots
1 Elastic IP for the single instance
5 GB transfer (in, out, inter-datacenter)
With an m4.large, you get 2 vCPUs, 8 GB of memory, and 450 Mbps throughput. Do you need all that horsepower? If that sounds like a lot, you can try a t2.medium for burstable CPU performance. It all depends on what utilization you're at now, and whether you're CPU-, memory-, or bandwidth-bound.
Feel free to ask more questions.
For backups of EBS volumes to S3, the best way would be to use EBS snapshots.

Development Environment for Testing MySQL Replication

Is there an easy way to set up an environment on one machine (or a VM) with MySQL replication? I would like to put together a proof of concept of MySQL replication with one master write instance and two slave instances for reads.
I can see doing it across 2 or 3 VMs running on my computer, but that would really bog down my system. I'd rather have everything running on the same VM. What's the best way to prove out scalability solutions like this in a local dev environment?
Thanks for your help,
Dave
I think that to truly test MySQL replication it is important to do so under realistic constraints.
If you put all the replica nodes under one operating system, then you no longer have the bandwidth constraint; the data transfer speed would be much higher than what you would get if those replica DBs were on different sites.
Putting everything under one VM is a configuration shortcut; for instance, it does not make you go through the networking setup.
I suggest you use multiple VMs, even if you have to put them on one physical machine. You can always configure the hypervisor to make the packets go through a router, in which case the I/O will be bound by whatever throughput the network interface has.
I can see doing it across 2 or 3 VMs running on my computer, but that would really bog down my system.
You can try making a few VMs with JeOS (Just Enough OS) versions of the operating system you want. I know Ubuntu has one, and it can boot in 128 MB of RAM, which makes it convenient to deploy lots of cloned VMs on one physical machine without monster amounts of RAM.
Next step would be doing the same thing on a cloud (Infrastructure as a Service, IaaS) provider, and try your setup on different geographical sites.
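Whichever VM layout you choose, a minimal master/replica setup needs little more than distinct server IDs and binary logging on the master; the paths and IDs below are illustrative:

```ini
# master's my.cnf
[mysqld]
server-id = 1
log_bin   = /var/log/mysql/mysql-bin.log

# each replica's my.cnf (a unique server-id per replica)
[mysqld]
server-id = 2
relay_log = /var/log/mysql/mysql-relay-bin
read_only = ON
```

Each replica is then pointed at the master with CHANGE MASTER TO (host, replication user, log file and position), started with START SLAVE, and checked with SHOW SLAVE STATUS.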
If what you're testing is machine-to-machine replication, then setting up multiple VMs on a virtual private network would be the correct environment to test it. If you use Ubuntu Server, you don't have to install more than you actually need -- just give the VMs enough space for a base install + MySQL + your data. Memory usage can be as little as 256 MB per VM. All you have to do is suspend or shut down the VMs when you're not running a full-up test.
I've had situations where I was running 4 or more VMs simultaneously on my workstation, either for development or testing purposes -- it's not that taxing unless you're trying to do video rendering in each VM.