Why does my Google Compute Engine instance have a slow response time?

I ran
ab -c 10 -n 1000 "MY_SITE_URL_HERE"
to test my server's response time, and it reports that the longest request took 4000 ms, which is unacceptable.
How can I diagnose the issue? Is it a software or a hardware problem?
And how can I improve server performance and reduce the response time?
FYI:
My site uses Apache 2.4.6 + PHP 7 + MySQL
and runs on
a Google Compute Engine VM (1 vCPU, 1.7 GB RAM, 10 GB disk) and
Cloud SQL (D1, 1st gen, 250 GB).
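One way to see where each request's time goes is to break a single request down with curl (MY_SITE_URL_HERE is the same placeholder as above):
# prints DNS, TCP connect, time-to-first-byte, and total time for one request
curl -o /dev/null -s -w "dns:%{time_namelookup}s connect:%{time_connect}s ttfb:%{time_starttransfer}s total:%{time_total}s\n" "MY_SITE_URL_HERE"
If the connect time is small but the time-to-first-byte is large, the delay is most likely spent in PHP/MySQL rather than in the network.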

Related

CbBackup Tool gets interrupted after 30s of inactivity for specific buckets in a cluster

I have a cluster with 4 buckets in it. Whenever I try to take backups for the buckets individually, two of the buckets get backed up while the other two do not get backed up.
For the buckets which do not get backed up, I got this message in the console.
w0 no response for 30 seconds while there 1024 active streams
This is the command I’m running for each bucket.
./cbbackup http://localhost:8091 /datadrive/cb-backups/ -u <USERNAME> -p '<PASSWORD>' -b <BUCKET_NAME> -m full
These are the specs for those two buckets which were not getting backed up.
Bucket 1 - 4GB RAM, currently has around 400,000 documents.
Bucket 2 - 4GB RAM, currently has around 150,000 documents.
It’s worth noting that both buckets originally had 2 GB of RAM. After increasing the RAM for both buckets, backups started working again, but the same error occurred again the next day.
Is there an inherent problem with the CbBackup tool? Does anyone know how the backups are actually taken? That would give more insight into why this error might occur.
Couchbase Server version: Community Edition 5.0.1 build 5003
Thanks for your valuable time.
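One way to get more detail on where the transfer stalls is to re-run the backup with verbose output and fewer concurrent workers; the -t/--threads and -v flags below are an assumption about the options supported by the Python-based cbbackup shipped with Community Edition:
# same backup as above, but with 4 workers and verbose logging (flags assumed, check ./cbbackup --help)
./cbbackup http://localhost:8091 /datadrive/cb-backups/ -u <USERNAME> -p '<PASSWORD>' -b <BUCKET_NAME> -m full -t 4 -v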

Slow Performance for multiple requests in RDS

I have started using AWS RDS MySQL for my use case.
I am using RDS to store server information uploaded by users through a website.
Uploading 1 file with 500 records took 20 seconds. For comparison, uploading 9 files simultaneously with a total of 500 records took 55 seconds.
I don't know the possible reason for this. I am using a db.t3.large instance.
Which RDS metrics or Performance Insights metrics should I look at?
Is this issue due to the lower baseline I/O performance of the gp2 volume?
Check the following metrics:
Write IOPS (Count/Second)
Write Latency (Milliseconds)
Network Receive Throughput
Are you running the operation over the internet or within the VPC? There will be some network latency if you are going over the internet.
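Those metrics can also be pulled from CloudWatch outside the console; a sketch with the AWS CLI, where the instance identifier and time window are placeholders:
# average write latency for the instance, sampled every 60 seconds
aws cloudwatch get-metric-statistics \
  --namespace AWS/RDS \
  --metric-name WriteLatency \
  --dimensions Name=DBInstanceIdentifier,Value=<DB_INSTANCE_ID> \
  --start-time <START_TIME> --end-time <END_TIME> \
  --period 60 --statistics Average
Repeat with --metric-name WriteIOPS and NetworkReceiveThroughput to compare the single-file and nine-file runs.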

Couchbase 4.0 Community Edition benchmark

We are benchmarking Couchbase and observing some very strange behaviour.
Setup phase:
Couchbase cluster machines:
2 x EC2 r3.xlarge with general-purpose 80 GB SSD (not EBS-optimised), IOPS 240/3000.
Couchbase settings:
Cluster:
Data Ram Quota: 22407 MB
Index Ram Quota: 2024 MB
Index Settings (default)
Bucket:
Per Node Ram Quota: 22407 MB
Total Bucket Size: 44814 MB (22407 x 2)
Replicas enabled (1)
Disk I/O Optimisation (Low)
Each node runs all three services
Couchbase client:
1 x EC2 m4.xlarge with general-purpose 20 GB SSD (EBS-optimised), IOPS 60/3000.
The client is running the 'YCSB' benchmark tool.
ycsb load couchbase -s -P workloads/workloada -p recordcount=100000000 -p core_workload_insertion_retry_limit=3 -p couchbase.url=http://HOST:8091/pools -p couchbase.bucket=test -threads 20 | tee workloadaLoad.dat
PS: All the machines are residing within the same VPC and subnet.
Results:
While everything works as expected
The average ops/sec is ~21000
The 'disk write queue' graph floats between 200K and 600K (periodically drained).
The 'temp OOM per sec' graph is at constant 0.
When things start to get weird
After about ~27M documents have been inserted, we start seeing the 'disk write queue' rising constantly (not getting drained).
At a disk write queue size of about ~8M, OOM failures start to show themselves and the client receives 'Temporary failure' from Couchbase.
After 3 retries of each YCSB thread, the client stops, having inserted only ~27% of the overall documents.
Even after the YCSB client has stopped running, the 'disk write queue' moves asymptotically towards 0 and is only fully drained after ~15 min.
P.S.
When we benchmark locally on a MacBook with 16 GB of RAM + an SSD (local client + single-node server), we do not observe this behaviour and the 'disk write queue' is drained continuously in a predictable manner.
Thanks.
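For anyone reproducing this, the disk write queue can also be sampled directly on a node rather than read off the UI graph; a sketch using cbstats, assuming a default Linux install path and the standard ep-engine stat names:
# ep_queue_size / ep_flusher_todo together make up the disk write queue for the bucket
/opt/couchbase/bin/cbstats localhost:11210 all -b test | grep -E 'ep_queue_size|ep_flusher_todo'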

Why is a large MySQL database crashing the app?

I have a webapp which I'm trying to deploy to OpenShift; it serves historical plots of market data. The MySQL database is around 600 MB and has >12,000,000 rows across about 6 tables. If I deploy and run the app, it runs for at most a couple of minutes, then crashes with exceptions related to null database connections. I tried restarting MySQL and looking at the quota (~750 MB of 1024 MB) to no avail. If I reduce the database size drastically, the webapp runs fine; so far a couple of hours with no crashes. I also have another webapp on OpenShift that uses a small MySQL database and runs just fine.
From what I can tell from the documentation, a gear provides up to 1 GB of storage, and I'm under that amount. If I lower my data footprint drastically, MySQL behaves fine. Any ideas why the app fails with a 600 MB database, and what limitations are causing the failure? The webapp has also been running solidly on a VPS for about 2 years with no issues, so I'm pretty confident the problem lies within OpenShift's MySQL cartridge.
Edit: Additional Information
The MySQL cartridge is on a small gear and shared with the webapp.
The number of database connections is 5, from a pool.
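Since the small gear's memory is shared between the webapp and the MySQL cartridge, it is worth checking how much memory MySQL is configured to claim and how many connections it has actually used; a quick check from the gear's shell, with connection details as placeholders (the variable names are standard MySQL):
# buffer pool size, connection limit, and peak connection count
mysql -h <DB_HOST> -u <DB_USER> -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'; SHOW VARIABLES LIKE 'max_connections'; SHOW GLOBAL STATUS LIKE 'Max_used_connections';"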

MySQL InnoDB insertion is very slow

We use MySQL Server 5.1.43, 64-bit edition, with InnoDB as the storage engine.
We have a SQL script which we execute every time we build the application.
On an Ubuntu machine with MySQL Server and the InnoDB engine, it takes about 55 seconds to complete.
If I run the same script on OS X, it takes close to 3 minutes!
Any ideas why OS X is so slow executing this script?
You may want to try starting the server with my.ini changed to
innodb_flush_log_at_trx_commit=2
and change back to
innodb_flush_log_at_trx_commit=1
for production usage.
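If editing my.ini and restarting is inconvenient for a quick test, the same setting can be toggled at runtime, assuming a user with the SUPER privilege:
# relaxes log flushing to once per second instead of at every commit
mysql -u root -p -e "SET GLOBAL innodb_flush_log_at_trx_commit = 2;"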
I suspect that the fsync API on OS X is slower than on Linux.
My crystal ball needs more information.
It's not the same to execute a script against a DB on the same machine; consider the network overhead, especially if the inserts are data-intensive.