How to increase root disk partition using ContainerOs within Google Cloud? [closed] - google-compute-engine

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 3 years ago.
After resizing the disk, the root partition did not grow to use the newly available space.
When running
fdisk -l
on the remote VM, the result is:
The backup GPT table is not on the end of the device. This problem will be corrected by write.
Disk /dev/sda: 64 GiB, 68719476736 bytes, 134217728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: XXXXXX-XXXXX-XXX-XXX-XXXX
Device Start End Sectors Size Type
/dev/sda1 8704000 67108830 58404831 27.9G Microsoft basic data
/dev/sda2 20480 53247 32768 16M ChromeOS kernel
/dev/sda3 4509696 8703999 4194304 2G ChromeOS root fs
/dev/sda4 53248 86015 32768 16M ChromeOS kernel
/dev/sda5 315392 4509695 4194304 2G ChromeOS root fs
/dev/sda6 16448 16448 1 512B ChromeOS kernel
/dev/sda7 16449 16449 1 512B ChromeOS root fs
/dev/sda8 86016 118783 32768 16M Microsoft basic data
/dev/sda9 16450 16450 1 512B ChromeOS reserved
/dev/sda10 16451 16451 1 512B ChromeOS reserved
/dev/sda11 64 16447 16384 8M BIOS boot
/dev/sda12 249856 315391 65536 32M EFI System
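As a sanity check on the fdisk output above, the numbers line up: 134217728 sectors of 512 bytes is 64 GiB, while the data partition (/dev/sda1) ends at sector 67108830, leaving roughly half the disk unallocated until the partition is grown:

```shell
# Numbers taken from the fdisk -l output above.
total_sectors=134217728
sector_bytes=512
last_end=67108830   # end sector of /dev/sda1, the data partition

disk_gib=$(( total_sectors * sector_bytes / 1024 / 1024 / 1024 ))
unused_gib=$(( (total_sectors - last_end) * sector_bytes / 1024 / 1024 / 1024 ))
echo "disk: ${disk_gib} GiB, unallocated: ~${unused_gib} GiB"
```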
I saw a lot of answers saying that I should use the growpart command, but that command is not available, and it seems that on Container-Optimized OS you cannot install anything. I tried yum, apt, apt-get, and rpm anyway, without success.
I dug through the Google documentation but did not find anything related to Container-Optimized OS.
The only workaround I found is to restart the VM, but is there any alternative that does not involve a restart?

After resizing the disk, root partition did not take more space that is available.
I believe you are doing an online resize of the disk, right? If so, you can reboot your Container-Optimized OS (COS) machine after the resize. After the reboot, the filesystem will be automatically resized to fit your disk. What happens behind the scenes is that every time COS boots up, resize-stateful-partition.service handles this logic for you.
If you cannot easily reboot the COS VM, you could try running sudo /usr/share/cloud/resize-stateful, which should also work.
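For reference, the whole no-reboot flow might look like the sketch below. The disk name, zone, and size are placeholder assumptions, and DRY_RUN=1 (the default here) only prints the commands so nothing is actually executed:

```shell
DRY_RUN=${DRY_RUN:-1}

# Print instead of executing when DRY_RUN=1, so this sketch is safe to run anywhere.
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Step 1: grow the persistent disk (run from your workstation, not the VM).
run gcloud compute disks resize my-cos-disk --size=64GB --zone=us-central1-a
# Step 2: on the COS VM, grow the stateful partition without rebooting.
run sudo /usr/share/cloud/resize-stateful
```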

Container-Optimized OS is an operating system image for your Compute Engine VMs that is optimized for running Docker containers. It is maintained by Google and is based on the open source Chromium OS project. This explains why you cannot use growpart or install it.
You could try the Chrome OS resizer posted HERE, but I don't recommend modifying Google-managed systems, as it could cause other issues in the future.

Related

Why is MySQL consuming so much memory?

I have a MySQL 5.6.36 database, ~35 GB in size, running on CentOS 7.3 with 48 GB of RAM.
[UPDATE 17-08-06] I will update relevant information here.
I am seeing that my server runs out of memory and crashes, even with ~48 GB of RAM. I could not keep it running on 24 GB, for example. A DB this size should be able to run on much less. Clearly, I am missing something fundamental.
[UPDATE: 17-08-05] By crashes, I mean mysqld stops and restarts with no useful information in the log, other than that it is restarting from a crash. Also, with all this memory, I got this error during recovery:
[ERROR] InnoDB: space header page consists of zero bytes in tablespace ./ca_uim/t_qos_snapshot.ibd (table ca_uim/t_qos_snapshot)
The relevant portion of my config file looks like this [EDITED 17-08-05 to add missing lines]:
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
lower_case_table_names = 1
symbolic-links=0
sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
max_allowed_packet = 32M
max_connections = 300
table_definition_cache=2000
innodb_buffer_pool_size = 18G
innodb_buffer_pool_instances = 9
innodb_log_file_size = 1G
innodb_file_per_table=1
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
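As a rough cross-check of these settings (not MySQL's exact accounting; the ~3 MB per-connection figure is an assumption covering sort, join, and read buffers at their defaults), the configured worst case is nowhere near 48 GB, which suggests the growth is coming from somewhere outside these settings:

```shell
buffer_pool_mb=$(( 18 * 1024 ))   # innodb_buffer_pool_size = 18G
max_connections=300               # from the config above
per_conn_mb=3                     # assumed per-connection buffer total

worst_case_mb=$(( buffer_pool_mb + max_connections * per_conn_mb ))
echo "~${worst_case_mb} MB configured worst case"
```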
It was an oversight to use file-per-table, and I need to change that (I have 6000 tables, and most of them are partitioned).
After running for a short while (one hour), mytop shows this:
MySQL on 10.238.40.209 (5.6.36) load 0.95 1.08 1.01 1/1003 8525 up 0+01:31:01 [17:44:39]
Queries: 1.5M qps: 283 Slow: 22.0 Se/In/Up/De(%): 50/07/09/01
Sorts: 27 qps now: 706 Slow qps: 0.0 Threads: 118 ( 3/ 2) 43/28/01/00
Key Efficiency: 100.0% Bps in/out: 76.7k/176.8k Now in/out: 144.3k/292.1k
And free shows this:
# free -h
total used free shared buff/cache available
Mem: 47G 40G 1.5G 8.1M 5.1G 6.1G
Swap: 3.9G 508K 3.9G
Top shows this:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2010 mysql 20 0 45.624g 0.039t 9008 S 95.0 84.4 62:31.93 mysqld
How can this be? Is this related to file-per-table? The entire DB could fit in memory. What am I doing wrong?
In your my.cnf (MySQL configuration) file:
Add a setting in [mysqld] block
[mysqld]
performance_schema = 0
For MySQL 5.7.8 onwards, you will have to add extra settings as below:
[mysqld]
performance_schema = 0
show_compatibility_56 = 1
NOTE: This can cut your memory usage by 50-60%. "show_compatibility_56" is optional; it works in some cases, so it is better to check after adding it to the config file.
Well, I resolved the issue. I appreciate all the insight from those who responded. The solution is very strange, and I cannot explain why this solves the problem, but it does. What I did was add the following line to my.cnf:
log_bin
You may, in addition, need to add the following:
expire_logs_days = <some number>
We have seen at least one instance where the logs accumulated and filled up a disk. The default is 0 (no automatic removal). https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_expire_logs_days
Results are stored in and served from memory, and given that you're running 283 queries per second, there's probably a lot of data being dished out at any given moment.
I would think that you are doing a good job squeezing a lot out of that server. Consider the tables are one thing, then the schema involved for 6000 tables, plus the fact that you're pulling 283 queries per second against a 35 GB database and that those results are held in memory while they are being served. The rest of us might as well learn from you.
Regarding the stopping and restarting of MySQL
[ERROR] InnoDB: space header page consists of zero bytes in tablespace ./ca_uim/t_qos_snapshot.ibd (table ca_uim/t_qos_snapshot)
You might consider trying innodb_flush_method=normal, which is recommended here and here, but I can't promise it will work.
I would check table_open_cache. You have a lot of tables, and it is clearly reflected in the average opened files per second: about 48, when a normal value is between 1 and 5.
That is confirmed by the values of Table_open_cache_misses and Table_open_cache_overflows; ideally those values should be zero. They represent failed attempts to use the cache and, in consequence, wasted memory.
You should try increasing it to at least 3000 and see the results.
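The "opens per second" figure comes from dividing the Opened_tables counter by Uptime from SHOW GLOBAL STATUS; the counter values below are made-up examples, not taken from the question:

```shell
opened_tables=172800   # hypothetical Opened_tables counter
uptime_seconds=3600    # hypothetical Uptime counter

opens_per_sec=$(( opened_tables / uptime_seconds ))
echo "avg table opens/sec: ${opens_per_sec}"   # roughly 1-5 is healthy; ~48 means the cache is too small
```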
Since you are on CentOS:
I would double-check that ulimit is unlimited, or around 20000 for your 6000 tables.
Consider setting swappiness to 1. I think it is better to have some swapping (while observing) than crashes.
Hopefully you believe in making ONLY one change at a time, so you can attribute progress to a specific configuration change. On 2017-08-07 at about 17:00, SHOW GLOBAL VARIABLES indicated innodb_buffer_pool_size was 128M. Change it in my.cnf to 24G, then shutdown/restart when permitted, please.
A) max_allowed_packet at 1G is likely what you meant in your configuration, considering that on 8/7/2017 your remote agents were sending 1G packets for processing on this equipment. How are the remote agents scheduled to send their data, to prevent this single use of memory from exhausting all 48G on this host? Status indicates bytes_received on 8/6/2017 was 885,485,832 from max_used_connections of 86 in the first 1520 seconds of uptime.
B) innodb_io_capacity at 200 is likely a significant throttle on your possible IOPS; we run at 700 here. The sqlio.exe utility was used to guide us in this direction.
C) innodb_io_capacity_max should likely be adjusted as well.
D) thread_cache_size of 11, consider going to 128.
E) thread_concurrency of 10, consider going to 30.
F) I understand that the length of process-list.txt, in terms of the number of Sleep IDs, is likely caused by the use of persistent connections. The connection is just waiting, for an extended period, for some additional activity from the client. 8/8/2017
G) STATUS Com_begin count is usually very close to Com_commit count, not in your case. 8/8/2017 Com_begin was 2 and Com_commit was 709,910 for 11 hours of uptime.
H) It would probably be helpful to see just 3 minutes of a General Log, if possible.
Keep me posted on your progress.
Please enable the MySQL error log in your usual configuration.
When MySQL crashes, protect the error log before restarting, and add last error-log available to your Question, please. It should have a clue WHY MySQL is failing.
Running the 'small' configuration will run like a dog when supporting the volume of activity reported by SHOW GLOBAL STATUS.
Please get back to your usual production configuration.
I am looking at your provided details and will have some tuning suggestions in the next 24 hours. It appears most of the process-list activity is related to replication. Would that be true?
Use of www.mysqlcalculator.com would be a quick way to sanity-check about a dozen memory consumption factors in less than 2 minutes.
118 active threads may be reasonable, but they would seem to be causing extreme context switching while trying to answer 118 questions simultaneously.
Would love to see your SHOW GLOBAL STATUS and SHOW GLOBAL VARIABLES, if you could get them posted.

Amazon AWS is very, very slow; MySQL and Apache eat a lot of memory [closed]

Closed 8 years ago.
I am using Amazon AWS EC2, and it is very, very slow. I don't know what is wrong.
Using the free and top commands, I found that MySQL and Apache are using a lot of memory.
Here is top -M (the output was posted as an image): Apache and MySQL consume the most.
Here is apache info:
[ec2-user#www ****]$ httpd -v
Server version: Apache/2.4.6 (Amazon)
Server built: Sep 20 2013 18:01:06
Mysql info:
Server version: 5.5.34 MySQL Community Server (GPL)
I have not modified any MySQL or Apache configuration files. What should I do next?
Any suggestion is welcome.
You haven't shown us any reason to think this is slow. What you have shown us is that almost all of your memory is free and that your CPUs, from the picture, are idle.
When analyzing memory usage, remember that "cached" memory holds things that were retrieved from the hard drive. Rather than freeing that memory and wasting it, the Linux kernel smartly leaves the data in memory, clearing it out when necessary. This is a good thing: it means that many things, e.g. MySQL data files, live in memory whenever possible. Caching files in memory is a great thing! You actually have over 6 GB free.
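On a Linux box you can see this directly in /proc/meminfo: MemAvailable counts reclaimable page cache as well as free pages, and is the number that actually matters (a minimal sketch, assuming a Linux system):

```shell
# MemAvailable includes cache the kernel can reclaim on demand; "free" alone understates it.
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
echo "total: ${mem_total_kb} kB, actually available: ${mem_avail_kb} kB"
```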
The VIRT memory in top is basically meaningless; ignore it. See https://serverfault.com/questions/138427/top-what-does-virtual-memory-size-mean-linux-ubuntu What you should be looking at is RES, and 256 MB for MySQL plus a few dozen megabytes for Apache and mod_php is pretty nominal. In fact, for the server you appear to have (Large, with 7 GB of memory), if you have more than 200 MB or so of data in your database, you should probably give more of that memory to MySQL, as it would yield great performance gains.
If your site is slow, it's not because of memory.
A few questions to consider to determine likely performance bottlenecks:
what instance type are you using?
is it Ephemeral backed, or EBS Backed?
do you see spikes in Wait time when the server is being "slow"?
What php application are you running?
AWS instances are often underpowered relative to their memory and ephemeral storage. For example, Large instances have 7 GB of RAM but only two 2 GHz, circa-2007 cores, so they are not fast by any means. The c1 and m3 instance lines do much to improve on this.

Percona 5.6 high memory usage

I'm not sure if Stack Overflow is the right place to ask this, but I recently upgraded from Percona 5.5 to 5.6, and my memory usage has skyrocketed!
This is from PS:
mysql 4598 0.0 29.5 1583356 465312 ? Sl Oct17 9:07 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib6
I'm on a dedicated VPS.
My server only has a gig of RAM... how is the memory usage only 30% according to ps?
I have my ram set in the config to be less than this, and when I run MySQLTuner i get:
[OK] Maximum possible memory usage: 338.9M (22% of installed RAM)
So how am I using almost 500 MB of physical memory, and over a gig and a half of virtual?
Is this a bug in MySQL, or something with my server?
I found out that in MySQL 5.6, performance_schema is on by default. It was disabled by default in 5.5 and earlier; it has been enabled by default since 5.6.6.
Adding performance_schema = off to my config file fixes the issue.
I'd imagine anyone who doesn't have the memory to run performance_schema would not be using it anyway.
This might affect other distributions of MySQL 5.6.6 and later as well.
I had this problem, and reducing a few cache values that had been increased in my.ini sorted the problem out.
table_definition_cache - set to 400
From http://bugs.mysql.com/bug.php?id=68287, where this is discussed:
Yes, there are thresholds based on table_open_cache and table_definition_cache and max_connections and crossing the thresholds produces a big increase in RAM used. The thresholds work by first deciding if the server size is small, medium or large.
Small: all three are same as or less than defaults (2000, 400, 151).
Large: any of the three is more than twice the default.
Medium: others.
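The sizing rule quoted above can be sketched as a small function (the defaults 2000/400/151 are from the bug report; the example inputs are hypothetical):

```shell
# Classify a server per the quoted thresholds:
# small  = all three at or below the defaults (2000, 400, 151)
# large  = any of the three more than twice its default
# medium = everything else
server_size() {
  toc=$1; tdc=$2; conn=$3
  if [ "$toc" -le 2000 ] && [ "$tdc" -le 400 ] && [ "$conn" -le 151 ]; then
    echo small
  elif [ "$toc" -gt 4000 ] || [ "$tdc" -gt 800 ] || [ "$conn" -gt 302 ]; then
    echo large
  else
    echo medium
  fi
}

server_size 2000 400 151    # → small (all at defaults)
server_size 2000 2000 151   # → large (table_definition_cache is over twice its default)
```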
From memory, mine was set to 2000+, and dropping it sorted the problem.
What helped me on CentOS was changing the memory allocator:
yum install jemalloc-devel
and add to my.cnf:
[mysqld_safe]
malloc-lib = /usr/lib64/libjemalloc.so.1

MYSQL Memory Usage [duplicate]

This question already has answers here:
MySQL maximum memory usage
(7 answers)
Closed 9 years ago.
I have MySQL installed on a VPS Windows 2008 Web Server with 1GB of memory.
Is there a way of setting a maximum memory usage limit for MySQL?
I have workbench installed if it can be done through that.
Many Thanks
John
If you really want to impose a hard limit, you could do so, but you'd have to do it at the OS level as there is no built-in setting. In linux, you could utilize ulimit, but you'd likely have to modify the way MySQL starts in order to impose this.
The best solution is to tune your server down, so that a combination of the usual MySQL memory settings will result in generally lower memory usage by your MySQL installation. This will of course have a negative impact on the performance of your database, but some of the settings you can tweak in my.ini are:
key_buffer_size
query_cache_size
query_cache_limit
table_cache
max_connections
tmp_table_size
innodb_buffer_pool_size
To plot memory usage on Linux, you can use a simple script:
#!/bin/sh
# Append a timestamped snapshot of mysqld's memory usage every minute.
while true
do
    date >> ps.log
    ps aux | grep mysqld >> ps.log
    sleep 60
done
If you are trying to check for table-cache-related allocations:
Run FLUSH TABLES and see whether memory usage goes down. Note, though, that because of how memory is allocated from the OS, you might not see VSZ going down. What you might see instead is that flushing tables regularly, or reducing the table cache, keeps memory consumption within reason.
It is often helpful to check how much memory InnoDB has allocated. In fact, this is often one of the first things I do, as it is the least intrusive. Run SHOW ENGINE INNODB STATUS and look for the memory information block.

Increasing the number of simultaneous request to mysql

Recently we changed the app server of our Rails website from Mongrel to Passenger [with REE and Rails 2.3.8]. The production setup has 6 machines pointing to a single MySQL server and a memcached server. Before, each machine had 5 Mongrel instances; now we have 45 Passenger instances per machine, as each machine has 16 GB of RAM and two 4-core CPUs. Once we deployed this Passenger setup to production, the website became very slow and all the requests started to queue up, and eventually we had to roll back.
Now we suspect the cause is the increased load on the MySQL server. Before, there were only 30 MySQL connections; now we have 275. The MySQL server has a similar setup to our website machines, but all the configs were left at their default limits. The buffer_pool_size is only 8 MB, though we have 16 GB of RAM, and the number of concurrent threads is 8.
Will this increase in simultaneous connections have caused MySQL to respond more slowly than when we had only 30 connections? If so, how can we make MySQL perform better with 275 simultaneous connections in place?
Any advice greatly appreciated.
UPDATE:
More information on the mysql server:
RAM: 16 GB. CPU: two processors, each with 4 cores.
Tables are InnoDB, with only default InnoDB config values.
Thanks
An idle MySQL connection uses up a stack and a network buffer on the server. That is worth about 200 KB of memory and zero CPU.
In a database using InnoDB only, you should edit /etc/sysctl.conf to include vm.swappiness = 0 to delay swapping out processes as long as possible. You should then increase innodb_buffer_pool_size to about 80% of the systems memory assuming a dedicated database server machine. Make sure the box does not swap, that is, VSIZE should not exceed system RAM.
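Putting numbers on the two points above (back-of-envelope only; the ~200 KB per idle connection is the estimate from this answer, and the 80% rule is applied to this 16 GB box):

```shell
# 275 idle connections at ~200 KB each: negligible next to the buffer pool.
connections=275
kb_per_idle_conn=200
idle_mb=$(( connections * kb_per_idle_conn / 1024 ))
echo "idle connections: ~${idle_mb} MB"

# innodb_buffer_pool_size at ~80% of a dedicated 16 GB server:
pool_mb=$(( 16 * 1024 * 80 / 100 ))
echo "suggested buffer pool: ~${pool_mb} MB"
```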
innodb_thread_concurrency can be set to 0 (unlimited), or 32 to 64 if you are a bit paranoid, assuming MySQL 5.5. The limit is lower in 5.1, and around 4-8 in MySQL 5.0. It is not recommended to use such outdated versions of MySQL on a machine with 8 or 16 cores; there are huge improvements with respect to concurrency in MySQL 5.5 with InnoDB 1.1.
The variable thread_concurrency has no meaning on current Linux. It is used to call pthread_setconcurrency(), which does nothing on Linux. It used to have a function in older Solaris/SunOS.
Without further information, the cause of your performance problems cannot be determined with any certainty, but the above general advice may help. More general advice, based on my limited experience with Ruby, can be found in http://mysqldump.azundris.com/archives/72-Rubyisms.html That article is the summary of a consulting job I once did for an early version of a very popular Facebook application.
UPDATE:
According to http://pastebin.com/pT3r6A9q , you are running 5.0.45-community-log, which is awfully old and does not perform well under concurrent load. Use a current 5.5 build, it should perform way better than what you have there.
Also, fix the innodb_buffer_pool_size. You are going nowhere with only 8M of pool here.
While you are at it, innodb_file_per_table should be ON.
Do not switch on innodb_flush_log_at_trx_commit = 2 without understanding what that means, but it may help you temporarily, depending on your persistence requirements. It is not a permanent solution to your problems in any way, though.
If you have any substantial kind of writes going on, you need to review the innodb_log_file_size and innodb_log_buffer_size as well.
If that installation is earning money, you dearly need professional help. I am no longer doing this as a profession, but I can recommend people. Contact me outside of Stack Overflow if you want.
UPDATE:
According to your process list, you have very many queries in the state Sending data. MySQL is in this state while a query is being executed, that is, the main interior join loop/query execution loop is busy. SHOW ENGINE INNODB STATUS\G will show you something like:
...
--------------
ROW OPERATIONS
--------------
3 queries inside InnoDB, 0 queries in queue
...
If that number is larger than say 4-8 (inside InnoDB), 5.0.x is going to have trouble. 5.5.x will perform a lot better here.
Regarding the my.cnf: See my previous comments on your InnoDB. See also my comments on thread_concurrency (without innodb_ prefix):
# On Linux, this does exactly nothing.
thread_concurrency = 8
You are missing any InnoDB configuration at all. Assuming that you ARE using InnoDB tables, you are not performing well, no matter what you do.
As far as I know, it's unlikely that merely maintaining/opening the connections would be the problem. Are you seeing this issue even when the site is idle?
I'd try http://www.quest.com/spotlight-on-mysql/ or similar to see if it's really your database that's the bottleneck here.
In the past, I've seen basic networking craziness lead to behaviour similar to what you describe: someone had set up the new machines with an incorrect subnet mask.
Have you looked at any of the machine statistics on the database server? Memory/CPU/disk IO stats? Is the database server struggling?