I just started using git to manage my website's distribution. The site runs on a lighttpd server with the Symfony 2 PHP framework and connects to a remote MySQL server. When I cloned the project to my Mac (running Apache), the site still works, but it's extremely slow. The problem seems to be the MySQL connection: performing just a few extra queries (10 or so) results in significantly longer page load times. The remote server that hosts my site runs just fine; it's way faster than my local copy.
What are some common causes of a slowdown like this?
Firstly, I'd take a look at the "just a few extra queries" to see if those are taking an unreasonable amount of time.
I'm assuming that your local copy is still connecting to the remote MySQL server? If that is the case, the problem could be bandwidth. Rented servers are usually on 100 Mbit+ connections, so data transfer will be relatively quick compared to your broadband at home.
While your web app is running on your local copy, try running SHOW PROCESSLIST; on the MySQL server.
Finally, how powerful is your Mac compared to the server? If your Mac is underpowered and you're also trying to run Photoshop + Illustrator + iTunes, etc., that will make a difference.
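If you want to put a number on the network factor, here's a rough sketch (TypeScript with the mysql2 package, so not the Symfony/PHP stack in the question; host, credentials, and the table name are placeholders) that times a trivial SELECT 1 against the remote server, which is almost pure round trip, next to one of your real queries.

    // latency-probe.ts -- a minimal sketch, assuming the "mysql2" package is installed.
    // All connection details and the table name below are placeholders.
    import mysql from "mysql2/promise";

    async function main(): Promise<void> {
      const conn = await mysql.createConnection({
        host: "your-remote-db.example.com", // hypothetical remote MySQL host
        user: "app",
        password: "secret",
        database: "app_db",
      });

      // Almost pure network round trip: no real work happens on the server.
      let start = Date.now();
      await conn.query("SELECT 1");
      console.log(`SELECT 1 round trip: ${Date.now() - start} ms`);

      // One of your real queries, for comparison.
      start = Date.now();
      await conn.query("SELECT COUNT(*) FROM some_table"); // hypothetical table
      console.log(`Real query round trip: ${Date.now() - start} ms`);

      await conn.end();
    }

    main().catch(console.error);

If SELECT 1 alone takes, say, 100 ms from your home connection, ten sequential queries already add a full second per page before any real work is done.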
Related
I'm struggling with finding out how to properly test stuff on my local PC and then transfer that over to production.
So here is my situation:
I have a project in Node.js/TypeScript, and I'm using Prisma in it to manage my database. On my server I run a MySQL database, and for testing on my PC I've always just used SQLite.
But now that I want to use Prisma Migrate (because it's highly recommended for production), I can't, because I use different databases on my PC than on my server. So here's my question: what is the correct way to test with a database during development?
Should I just connect to my server and make a test database there? Use VS Code's SSH coding function to code directly on the server and connect to the database? Install MySQL on my PC? Like, what's the correct way to do it?
Always use the same brand and version of database in development and testing that you will eventually deploy to. There are compatibility differences between brands: an SQL query that works on SQLite does not necessarily work the same way on MySQL, and vice versa. Even data types and schema definitions aren't all the same between different SQL products.
If you use different SQL databases in development and production, you will waste a bunch of time and increase your gray hair debugging problems in production, as you insist, "it works on my machine."
This is avoidable!
When I develop on my local computer, I usually have an instance of MySQL Server running in a Docker container on my laptop.
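For the Prisma setup in the question, that mostly comes down to pointing the same schema at different MySQL instances via an environment variable. A minimal sketch, assuming the usual url = env("DATABASE_URL") datasource in schema.prisma; the connection strings below are placeholders:

    // db.ts -- one Prisma client for every environment; a sketch, assuming
    // schema.prisma declares:  url = env("DATABASE_URL")
    //
    // .env on the laptop (MySQL in Docker):
    //   DATABASE_URL="mysql://root:secret@localhost:3306/app_dev"
    // environment on the production server (placeholder credentials):
    //   DATABASE_URL="mysql://app_user:secret@db-host:3306/app"
    import { PrismaClient } from "@prisma/client";

    // Same engine (MySQL) and same migrations in both places, so what
    // `prisma migrate dev` produces locally is what `prisma migrate deploy`
    // applies in production.
    export const prisma = new PrismaClient();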
I assume any test data on my laptop is temporary. I can easily recreate schema and data at any time, using scripts that are checked into my source control repo, so I don't worry about losing any data. In fact, I feel no hesitation to drop it and recreate it several times a week.
So if I need to upgrade the local database version to match an upgrade on production, I just delete the Docker container and its data, pull the new Docker image version, initialize a new data dir, and reload my test data again.
Every step is scripted, even the Docker pull.
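As an illustration of what those scripts can look like in the Prisma setup from the question, here is a hypothetical seed script, checked into the repo, so the local database can be dropped and rebuilt at will (the model name and fields are placeholders and must match your own schema.prisma):

    // prisma/seed.ts -- a hypothetical seed script; the User model and its
    // fields are placeholders.
    import { PrismaClient } from "@prisma/client";

    const prisma = new PrismaClient();

    async function main(): Promise<void> {
      // Wipe and reload disposable test data.
      await prisma.user.deleteMany();
      await prisma.user.createMany({
        data: [
          { email: "alice@example.com", name: "Alice" },
          { email: "bob@example.com", name: "Bob" },
        ],
      });
    }

    main()
      .catch((e) => {
        console.error(e);
        process.exit(1);
      })
      .finally(() => prisma.$disconnect());

Prisma's `prisma migrate reset` command drops the database, reapplies the migrations, and runs the configured seed script, so the recreate-from-scratch step becomes a single command.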
The caveat to my practice is that you can't necessarily duplicate the software if you use cloud databases, for example Amazon Aurora. There's no way to run an Aurora-compatible instance on your laptop (and don't believe the salespeople that Aurora is fully compatible with MySQL; it's not). So you could run a small Aurora instance in a development VPC and connect to that from your app development environment. At least if your internet connection is reliable enough.
By the way, a similar rule applies to all the other technology you use in development: the version of Node.js, Prisma, other NPM dependencies, HTTP and cache servers, etc. Even the operating system can be a source of compatibility issues, though you may have to develop in a virtual machine to match the OS to production exactly.
At one past job, I helped the developer team create what we called the "golden image": a pre-configured VM with all our software dependencies installed. We used this golden image both for the developer sandbox VMs and as the AMI from which we launched the production Amazon EC2 instances, so all the developers were guaranteed a test environment that matched production exactly. After that, if they had code problems, they could fix them in development with much higher confidence that the fix would work after deploying to production.
I have a Docker Compose file with about six different services: a couple of API services (in Ruby and PHP), a web server, and a couple of backend processing services.
Normal setup -- for a couple of weeks I was running this with a local MySQL server, and everything was snappy and fast. Then we decided to buy an AzureDB so that while developing we could all be on the "same database".
Now each call to the API service takes 3 to 5 seconds, up from under half a second.
I suspect this is because of the networking setup when using Compose -- is there an easy way around this slowness? It's horrible!
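One way to see where the time actually goes is to wrap the query call so each API request logs how many queries it ran and how long they took in total. A hedged sketch only, in TypeScript with the mysql2 package (not the Ruby/PHP services above, where the idea carries over); the connection details are placeholders:

    // timed-db.ts -- a rough sketch, not the project's actual code; assumes the
    // "mysql2" package and placeholder connection settings.
    import mysql from "mysql2/promise";

    const pool = mysql.createPool({
      host: "your-azure-db.example.com", // placeholder host
      user: "app",
      password: "secret",
      database: "app_db",
      connectionLimit: 10,
    });

    let queryCount = 0;
    let queryMillis = 0;

    // Use this instead of pool.query() so every query is counted and timed.
    export async function timedQuery(sql: string, params: unknown[] = []) {
      const start = Date.now();
      try {
        return await pool.query(sql, params);
      } finally {
        queryCount += 1;
        queryMillis += Date.now() - start;
      }
    }

    // Call at the end of each API request (e.g. from middleware) to see the totals.
    export function reportAndReset(): void {
      console.log(`queries: ${queryCount}, total DB time: ${queryMillis} ms`);
      queryCount = 0;
      queryMillis = 0;
    }

If each request turns out to run dozens of sequential queries and each one pays a ~100 ms round trip to the Azure database, that alone adds up to several seconds, regardless of the Compose network.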
I am currently running a virtualized environment for my web and DB servers. When I access the web server or the MySQL server individually, they are both fast. The websites on the web server that do not require the DB server also all load quickly. However, when I access a hosted website that requires the web server to pull data from the DB server, there is about a 5-7 second latency on every page load. This has been confirmed with both a very simple site and with a WordPress setup. Here is the config:
Web server - CentOS 6.5, Apache 2.2.15
DB server - CentOS 6.5, MySQL 5.1.73
My question is, are the servers continuously authenticating with one another (and thus causing latency) on every single db call? If that is the case, does anyone know how to permanently authenticate between the two?
I might be way off on this assumption and authentication could have nothing to do with it. I am completely open to any and all ideas at this point. Thank you very much.
V/R,
Tony
To me it seems to be a network issue.
And obviously the DB server will need to authenticate every time there is a hit.
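If that per-hit connect/authenticate cost is the problem, the usual mitigation is to reuse connections instead of opening a fresh one on every page load. A sketch of the idea in TypeScript with the mysql2 package (not the Apache/WordPress stack in the question, where persistent connections play the same role); the connection details and table are placeholders:

    // pool.ts -- a minimal sketch, assuming the "mysql2" package; host, user,
    // password, and database are placeholders.
    import mysql from "mysql2/promise";

    // The pool opens (and authenticates) a handful of connections once and then
    // hands them out per request, so the TCP handshake and MySQL authentication
    // are not repeated on every page load.
    const pool = mysql.createPool({
      host: "db-server.internal", // placeholder: the DB VM's address
      user: "webapp",
      password: "secret",
      database: "site",
      connectionLimit: 10,
    });

    export async function getPageData(slug: string) {
      // Each call borrows an already-authenticated connection from the pool.
      const [rows] = await pool.query("SELECT * FROM pages WHERE slug = ?", [slug]); // hypothetical table
      return rows;
    }

That said, a single authentication handshake on a local network normally takes only a few milliseconds, so if reusing connections doesn't help, the 5-7 seconds is probably coming from somewhere else on the network path.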
Our customer's application uses a MySQL database. However, this server has no monitoring. I want to install the OpenNMS application (which uses PostgreSQL) to monitor the solution and send traps to the main NMS system.
Is there any problem having both on the same server?
No, there is no technical problem: by default they listen on different ports.
The only problem that could arise is that each individual DB might be slower compared to an installation on separate physical machines, because they share (and compete for) the same resources (I/O, memory, CPU, network, ...).
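As a concrete illustration of the separate default ports, here is a small hedged sketch (TypeScript, assuming the mysql2 and pg packages; credentials and database names are placeholders) connecting to both servers on the same host:

    // both-dbs.ts -- a sketch only; user names, passwords, and database names are placeholders.
    import mysql from "mysql2/promise";
    import { Client } from "pg";

    async function main(): Promise<void> {
      // The application's MySQL, on its default port 3306.
      const my = await mysql.createConnection({
        host: "localhost",
        port: 3306,
        user: "app",
        password: "secret",
        database: "app_db",
      });

      // OpenNMS's PostgreSQL, on its default port 5432.
      const pg = new Client({
        host: "localhost",
        port: 5432,
        user: "opennms",
        password: "secret",
        database: "opennms",
      });
      await pg.connect();

      const [mysqlVersion] = await my.query("SELECT VERSION()");
      console.log(mysqlVersion);

      const pgVersion = await pg.query("SELECT version()");
      console.log(pgVersion.rows);

      await my.end();
      await pg.end();
    }

    main().catch(console.error);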
I've developed an application using the Microsoft Sync Framework 2.1 SDK and my current deployment method has been:
Make a backup of the unprovisioned database from a development machine and restore it on the server.
Provision the server, followed by the client.
Sync the databases.
Take a backup of the synced database on the development machine and use that for the client installations. It is included in an InstallShield package as a SQL Server backup that I restore on the client machine.
That works, but on the client machine I would now also like to create a separate test database using the same SQL Server backup, without doubling the size of the installation. That also works, but because the client test copy is no longer synced with the test version on the server, it attempts to download all records, which takes many hours over slower Internet connections.
Because the integrity of the test database is not critical, I'm wondering if there's a way to essentially mark it as 'up to date' on the client machine without too much network traffic?
After looking at the way the tracking tables work I'm not sure this is even possible without causing other clients to either upload or download everything. Maybe there is an option to upload only from a client that I've missed? That would suit this purpose fine.
Every time you take a backup of a provisioned database and restore it to initialize another client or replica, make sure you run PerformPostRestoreFixup after you restore and before you sync it for the first time.
After further analysis of the data structures used by Sync Framework, I determined there was no acceptable way to achieve the result I was seeking without sending a significant amount of data between the client and server, approaching what a 'proper' sync would require.
Instead I ended up including a separate test database backup with the deployment, so that the usual PerformPostRestoreFixup could be performed, followed by a sync in the normal manner, the same way I was handling the live database.