We have a server with 1 GB RAM and a 2-core CPU, running Ubuntu.
The website is new, so traffic is very light. But storage is increasing by about 1% daily, which is alarming. Could you please evaluate this SQL tuner report and suggest a configuration? It's for an e-commerce website.
https://pastebin.com/tVw80PKu
1% per day compounded works out to roughly 38x per year (1.01^365 ≈ 37.8). You have 23.5 MB of data now; that would be about 900 MB after a year. How much disk do you have? If you have a few GB, what is the problem?
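To make the arithmetic explicit, here is the projection as a tiny Python check (the 23.5 MB starting size is the figure from your report):

    # Back-of-the-envelope projection of 1%/day compounded storage growth.
    start_mb = 23.5
    factor = 1.01 ** 365                                    # one year of daily 1% growth
    print(f"growth factor: {factor:.1f}x")                  # ~37.8x
    print(f"size after a year: {start_mb * factor:.0f} MB") # ~888 MB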
RAM is used as a cache, so you do not have to have enough RAM for the entire dataset. (Performance may suffer.)
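If you want to see how well that cache is coping on a 1 GB box, one way is to compute the InnoDB buffer pool hit rate from the server's status counters. A minimal sketch, assuming pymysql and placeholder credentials:

    import pymysql

    # Read the InnoDB buffer pool counters and compute the hit rate.
    conn = pymysql.connect(host="localhost", user="root",
                           password="...", database="mysql")  # placeholders
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'")
        status = dict(cur.fetchall())

    reads = int(status["Innodb_buffer_pool_reads"])             # pages read from disk
    requests = int(status["Innodb_buffer_pool_read_requests"])  # logical read requests
    print(f"buffer pool hit rate: {1 - reads / requests:.2%}")
    conn.close()

A hit rate well below ~99% on a steady workload suggests the working set no longer fits in the buffer pool.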
Related
I am running Aurora MySQL engine version 5.6.10a in the production environment. The automated DB snapshot size on 9th May was 120 GB, and it has since increased by 27 GB to 147 GB. I have checked that the DB size did not increase by even 1 GB. I looked on the internet for the reason why this happened but found nothing.
Graph of the snapshot size for the last two weeks:
It's pretty consistent up to 9th May and again after 10th May. Does anyone have insight into this issue?
The rate of increase in DB size:
VolumeBytesUsed Graph:
Your help will be much appreciated!!
Thanks,
Mayurkumar
There is a bug in Aurora storage provisioning; we faced the same issue for one of our clients.
In our case the storage grew from 17 GB to 22,000 GB in 15 days, while the DB size was just 16 GB. We were charged a lot for that period. You can contact AWS support for a resolution of this problem.
How were you checking your snapshot size? One possibility is that you may have configured daily backups or similar, which might be why your total backup size is growing. I'd love to help more if you can provide details on the number of snapshots and how you were measuring the size of the cluster and the snapshots (e.g., with a quick script like the sketch below).
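For example, a minimal boto3 sketch to list the automated cluster snapshots with their creation times and reported sizes (the cluster identifier is a placeholder; as far as I can tell, AllocatedStorage on an Aurora cluster snapshot reflects the cluster volume size in GiB):

    import boto3

    rds = boto3.client("rds")
    resp = rds.describe_db_cluster_snapshots(
        DBClusterIdentifier="my-aurora-cluster",  # placeholder
        SnapshotType="automated",
    )
    # One line per snapshot: when it was taken, its name, and its size.
    for snap in resp["DBClusterSnapshots"]:
        print(snap["SnapshotCreateTime"],
              snap["DBClusterSnapshotIdentifier"],
              snap["AllocatedStorage"], "GiB")

Counting the rows this prints will also tell you whether snapshots are piling up rather than any single one growing.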
I am currently using a t2.micro MySQL instance to store data in AWS. I have a Lambda function that runs every 2 minutes, inserting/updating a total of 3k rows. Data is being inserted and updated pretty frequently, and the only time data is really being read (besides me running test queries from time to time) is once a day at 3 AM, when I run 5 queries to generate 5 CSV files based on the previous day's data.
I can see my freeable memory dropping, and within the past day it has dropped below 100 MB while swap usage has gone up from about 5 MB to 15 MB. I've read some other SO threads about freeable memory, but none of them have really answered my question: how important is freeable memory if I am primarily doing writes to the DB? I am planning on upgrading to the next tier soon, but I'd like to know how important this is and whether I should make sure freeable memory always stays above a certain threshold. My DBA experience is slim to none, so I don't have much knowledge about what is going on under the hood or whether this is something I need to worry about.
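For context, this is roughly how I'm pulling the numbers; a minimal boto3 sketch against CloudWatch (the instance identifier is a placeholder, and SwapUsage works the same way):

    from datetime import datetime, timedelta
    import boto3

    cw = boto3.client("cloudwatch")
    # Average FreeableMemory over the last day, in 5-minute buckets.
    resp = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="FreeableMemory",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-t2-micro"}],  # placeholder
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], f"{point['Average'] / 1024 / 1024:.0f} MB")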
We have a 4-node cluster with 24 GB RAM per node, of which 18 GB per node has been given to Couchbase, with zero replication.
We have approximately 10M records in this cluster, writing ~2.5M/hour and expiring old items.
RAM usage fills the ~72 GB cluster-wide quota every ~12 days, and I need to restart the cluster to fix it. After a restart, RAM usage is back to ~20 GB.
Can someone please help me understand the reason for this?
FYI: auto-compaction is set to a 40% fragmentation level, and the metadata purge interval was set to 1 day, which we reduced to 2 hours. But it didn't help.
Under scenarios with very high memory-allocation churn, Couchbase can experience memory fragmentation, which would cause the effects you are describing. This was addressed in the 4.x release by switching to jemalloc on non-Windows OSes and using tcmalloc with aggressive decommit on Windows. I would suggest you download the RC version of Couchbase 4 (http://www.couchbase.com/nosql-databases/downloads#Couchbase_Server) and give it a try to see whether that fixes the issue.
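In the meantime, you can watch how far total memory use drifts from the memory actually holding data, which is a rough proxy for fragmentation/overhead. A minimal sketch against the cluster REST API, assuming the usual port 8091 and that the storageTotals.ram fields exist in your version (host and credentials are placeholders):

    import requests

    # Cluster-wide memory stats from the Couchbase REST API.
    resp = requests.get("http://couchbase-node:8091/pools/default",
                        auth=("Administrator", "password"))  # placeholders
    ram = resp.json()["storageTotals"]["ram"]

    used_mb = ram["used"] / 1024 / 1024           # total RAM in use on the nodes
    by_data_mb = ram["usedByData"] / 1024 / 1024  # RAM actually holding data
    print(f"used: {used_mb:.0f} MB, used by data: {by_data_mb:.0f} MB")
    # A steadily growing gap between the two numbers over those ~12 days
    # would point at fragmentation rather than genuine data growth.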
I have an instance of MySQL Server 5.6.20 running on Windows Server 2012. One table in particular in my database is very large (23 GB on disk, 31 million rows). When I query this table, even for simple operations such as a count(*), the performance is terrible, frequently taking as long as 40 minutes to complete.
Checking Resource Monitor, I see my Highest Active Time pinned at 100%, but only 1.5-2.0 MB per second being read from the disk (well below the drive's peak throughput). Internet research suggests this happens when reading highly fragmented files from disk, but the only file being read is the MySQL InnoDB database file. Am I interpreting this right, i.e. that the data file itself is heavily fragmented? Is there an SQL-specific solution (see the check sketched after the edit below), or is Windows defrag the correct approach to this problem?
EDIT
There are two Dell PERC H310 SCSI 1.8 TB Disks in the machine. Only one is formatted. RAID was never setup. No SSDs are installed.
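One possible SQL-side check: InnoDB tracks unused space inside the tablespace, and a large data_free relative to the table size indicates internal fragmentation (not the same thing as filesystem fragmentation, but OPTIMIZE TABLE rewrites the table and can reduce both). A minimal sketch, assuming pymysql; credentials, schema, and table name are placeholders:

    import pymysql

    conn = pymysql.connect(host="localhost", user="root",
                           password="...", database="mydb")  # placeholders
    with conn.cursor() as cur:
        # Size of data, indexes, and free (fragmented) space for one table.
        cur.execute("""
            SELECT data_length, index_length, data_free
            FROM information_schema.tables
            WHERE table_schema = %s AND table_name = %s
        """, ("mydb", "big_table"))
        data_length, index_length, data_free = cur.fetchone()

    gb = 1024 ** 3
    print(f"data: {data_length / gb:.1f} GB, indexes: {index_length / gb:.1f} GB, "
          f"free/fragmented: {data_free / gb:.1f} GB")
    # If data_free is large, OPTIMIZE TABLE big_table rebuilds the table
    # (for InnoDB this maps to ALTER TABLE ... FORCE) and compacts the file.
    conn.close()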
We run a server at work using a program called ShipWorks; we are an e-commerce company, so it aids us in shipping our orders.
We have been having intermittent latency issues with shipping-label printing and with searches through the program (which uses a SQL database) when all our users are on. We have between 8 and 12 users actively on ShipWorks.
Our server has 8 GB of RAM and a quad-core processor. I was using New Relic to monitor the server to determine the issue, and it looks like memory usage is going beyond where it should be.
Screenshot: http://tinypic.com/r/2j5bga0/5
Memory is staying at a constant 8600 MB of system swap and 5400 MB of used RAM. The server only has 8 GB of RAM, but this suggests around 14 GB is in use. I know there is virtual memory, but there has to be something wrong here. If anyone can help, it'd be much appreciated.
It turns out that what we needed was the upgrade from SQL Server Express to Standard. We just got it up yesterday and everything is going great now. (Express caps the database engine's buffer pool at roughly 1 GB, which would explain the constant memory pressure and swapping we were seeing.) Thanks, guys.
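For anyone who lands here later and wants to confirm the same diagnosis before buying a license, here is a minimal sketch that checks the edition and how much memory the engine is actually using (assumes pyodbc and a local trusted connection; the driver name is whichever one you have installed):

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=master;Trusted_Connection=yes;"
    )
    cur = conn.cursor()

    # Edition tells you whether the Express memory cap applies.
    cur.execute("SELECT SERVERPROPERTY('Edition')")
    print("edition:", cur.fetchone()[0])  # e.g. 'Express Edition (64-bit)'

    # How much physical memory the database engine process holds right now.
    cur.execute("SELECT physical_memory_in_use_kb FROM sys.dm_os_process_memory")
    print("engine memory in use:", cur.fetchone()[0] // 1024, "MB")
    conn.close()

If the edition comes back as Express and the engine is pinned near its cap while the box is swapping, the upgrade is the likely fix, as it was for us.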