How to increase MEMORY_TARGET on Oracle? - configuration

After shutting down the Oracle database, the instance failed to start up, showing "Specified value of MEMORY_TARGET is too small, needs to be at least 424M".
How can I increase the MEMORY_TARGET?
Thanks
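One common fix, assuming the instance uses an spfile, is to dump it to a text pfile, raise memory_target above the reported minimum, and rebuild the spfile. A minimal sketch in SQL*Plus as SYSDBA (the path and the 512M value are illustrative assumptions, not from the question):

-- Run while the instance is down; the pfile path is illustrative.
CREATE PFILE='/tmp/init_edit.ora' FROM SPFILE;
-- Edit /tmp/init_edit.ora and set memory_target=512M (or higher),
-- then rebuild the spfile and start the instance.
CREATE SPFILE FROM PFILE='/tmp/init_edit.ora';
STARTUP;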

Related

Is it possible to bypass the limitation on MySQL Azure?

I am using Azure MySQL version 5.6
When I try to import a large MySQL dump file from a Linux environment into the Azure PaaS (Azure Database for MySQL servers) using this command:
pv DBFILE.sql | mysql -u username@mysqlserver -h mysqlservername.mysql.database.azure.com -pPassword DBNAME
I am getting this message:
"The size of BLOB/TEXT data inserted in one transaction is greater
than 10% of redo log size. Increase the redo log size using
innodb_log_file_size."
Is there any way to bypass this error?
I read in the Microsoft documentation that "innodb_log_file_size" is not configurable. Can I split this large dump file into smaller ones and import them all? Does that make any difference?
The size of the dump file is not the problem. It won't help to split it up.
The problem is that the size of one BLOB or TEXT value on at least one row is greater than 1/10th the size of the innodb log file. You can't split the data to be less than a single BLOB or TEXT value.
According to the Azure documentation you linked to, the value of innodb_log_file_size is fixed at 256MB. This means you cannot import any row with a BLOB or TEXT value of more than 25.6MB. At least you can't import it to an InnoDB table.
The reason is that the redo log file has a fixed size, and that size limits the number of modified pages in the InnoDB buffer pool (not one-to-one, because the format of redo log records is not the same as pages in the buffer pool). The 10% ratio is somewhat arbitrary, but the limit on BLOB/TEXT values exists to prevent a giant BLOB from wrapping around and overwriting part of itself in a small redo log, which would leave the MySQL server in a state from which it could not recover after a crash. In MySQL 5.5, this was just a recommended limit. In MySQL 5.6, InnoDB began enforcing it, so an INSERT of a BLOB or TEXT value that is too large simply results in an error.
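To make the arithmetic concrete: with the redo log fixed at 256MB, the cap is 10% of 256MB = 25.6MB per value. A hedged sketch for finding offending rows in the source database before dumping (the table and column names are hypothetical placeholders):

-- my_table and big_column are placeholders for your own schema.
SELECT id, LENGTH(big_column) AS bytes
FROM my_table
WHERE LENGTH(big_column) > 0.10 * 256 * 1024 * 1024;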
Amazon RDS had a similar restriction years ago: they supported only a fixed innodb_log_file_size, 128MB as I recall, and it was not configurable.
I was at an AWS event years ago in San Francisco, and I found an opportunity to talk to the Amazon RDS Product Manager in the hall between sessions. I gave him feedback that leaving this setting at a relatively small value without the ability to increase it was too limiting. It meant that one could only insert BLOB/TEXT of 12.8MB or less.
I'm sure I was not the only customer to give him that feedback. A few months later, an update to RDS allowed that variable to be changed. But you must restart the MySQL instance to apply the change, just like if you run MySQL yourself.
I'm sure that Azure will discover the same thing, and get an earful of feedback from their customers.

Best way to optimize database performance in MySQL (MariaDB) with buffer size and persistent-connection configurations

I have:
a CRUD-heavy application in PHP 7.3 that uses the CodeIgniter framework.
only 2 users accessing the application.
a MariaDB 10.2 database with 10 tables. It mostly stores INTs and the default engine is InnoDB, but one table stores a "mediumtext" column.
the application driven by cron jobs (10 different jobs, each running every minute).
a job that performs on average 100-200 CRUD operations against the DB (in total ~1k-2k CRUD operations per minute across the 10 tables).
Tested:
Persistent connections in MySQL
I faced a maximum-connections-exceeded issue, and I noticed that CodeIgniter does not close connections unless you configure pconnect in database.php; setting it to true enables persistent connections. The fix I found was to set it to false so that all connections are closed automatically.
So I changed my configuration to disallow persistent connections.
After disabling persistent connections, my app started to run properly, but about 1 hour later it crashed again because of the errors shown below. I fixed those errors by setting max_allow_package to the maximum value in my.cnf for MariaDB.
Warning --> Error while sending QUERY packet. PID=2434
Query error: MySQL server has gone away
I realized the DB needed tuning. The database size is 1GB+ and I have a lot of CRUD jobs scheduled every minute, so I changed the buffer size to 1GB and the InnoDB pool size to 25% of it. I used MySQL Tuner to work out those values.
Finally, I am still getting query packet errors:
Packets out of order. Expected 0 received 1. Packet size=23
My server has 8GB RAM (25% used) and 4 cores x 2GHz (10% used).
I can't decide which configuration is the best option for now. I can't increase the RAM; only 25% of it is in use now, but with the key buffer at 1GB, heavy jobs could consume all of it.
Can I:
fix the DB errors,
increase the average rate of completed CRUD operations?
8GB RAM --> innodb_buffer_pool_size = 5G.
200 qpm --> no problem. (200qps might be a challenge).
10 tables; 2 users --> not an issue.
persistent connections --> frill; not required.
key_buffer_size = 1G? --> Why? You should not be using MyISAM. Change to 30M.
max_allow_package --> What's that? Perhaps a typo for max_allow_packet? Don't set that to more than 1% of RAM.
Packets out of order --> sounds like a network glitch, not a database error.
MEDIUMINT --> one byte smaller than INT, so it is a small benefit when applicable.
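A sketch of how those numbers might be applied at runtime; the 64M packet size is an assumption (anything well under 1% of 8GB), and online resizing of the buffer pool needs MariaDB 10.2.2+ or MySQL 5.7+, otherwise put the same values under [mysqld] in my.cnf and restart:

SET GLOBAL innodb_buffer_pool_size = 5 * 1024 * 1024 * 1024;  -- 5G on the 8GB box
SET GLOBAL key_buffer_size = 30 * 1024 * 1024;                -- 30M, since MyISAM is barely used
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;             -- assumed 64M, under 1% of RAM
-- SET GLOBAL does not survive a restart; mirror these in my.cnf as well.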

How to increase MySQL buffer length?

One of our WordPress plugins requires that we increase the MySQL buffer length. I cannot find anything on Stack Overflow that clearly explains how to do this. We are running a VPS with CentOS 7. Any idea how we can increase this value?
As of MySQL 5.6.2, the innodb_change_buffer_max_size configuration
option allows you to configure the maximum size of the change buffer
as a percentage of the total size of the buffer pool. By default,
innodb_change_buffer_max_size is set to 25. The maximum setting is 50.
You might consider increasing innodb_change_buffer_max_size on a MySQL
server with heavy insert, update, and delete activity, where change
buffer merging does not keep pace with new change buffer entries,
causing the change buffer to reach its maximum size limit.
You might consider decreasing innodb_change_buffer_max_size on a MySQL
server with static data used for reporting, or if the change buffer
consumes too much of the memory space that is shared with the buffer
pool, causing pages to age out of the buffer pool sooner than desired.
Test different settings with a representative workload to determine an
optimal configuration. The innodb_change_buffer_max_size setting is
dynamic, which allows you to modify the setting without restarting the
server.
You should read this; it might help:
https://dev.mysql.com/doc/refman/5.7/en/innodb-change-buffer-maximum-size.html
You need to change the buffer size in the server configuration. Refer to the steps at https://dev.mysql.com/doc/refman/5.7/en/innodb-change-buffer-maximum-size.html and https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_change_buffer_max_size
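Since the quoted documentation says the setting is dynamic, it can be changed without a restart; the 40 below is just an illustrative value between the default of 25 and the maximum of 50:

SET GLOBAL innodb_change_buffer_max_size = 40;  -- percent of the buffer pool
SHOW VARIABLES LIKE 'innodb_change_buffer_max_size';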

Understanding DB-level caching in RAM for Postgres and MySQL

Imagine we have a MySQL DB whose data size is 500 MB.
If I set innodb_buffer_pool_size to 500MB (or more), is it correct to think that all the data will be cached in RAM and my queries won't touch disk?
Is effective_cache_size in Postgres the same as MySQL's buffer pool, and can it also help avoid reading from disk?
I believe you are on the right track with regard to MySQL InnoDB tables. But remember that when measuring the size of a database, there are two components: data length and index length.
MySQL database size.
You also have no control over which databases are loaded into memory. If you want to guarantee that a particular DB stays loaded, you must make sure the buffer pool is large enough to hold all of its data and indexes, with some room to spare just in case.
MySQL status variables can then be used to see how the buffer pool is functioning.
I also highly recommend you use the buffer pool load/save variables so that the buffer pool is saved on shutdown and reloaded on startup of the MySQL server. Those variables are available from version 5.6 and up, I believe.
Also, check this out in regards to sizing your buffer pool.
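A hedged sketch of both points: summing data length and index length from information_schema to size the pool, and the 5.6+ save/load variables (innodb_buffer_pool_load_at_startup is read-only, so it has to go in my.cnf rather than through SET GLOBAL):

-- Measure data + indexes per schema to check what the pool must hold.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS total_mb
FROM information_schema.tables
GROUP BY table_schema;

-- Save the buffer pool contents at shutdown (dynamic variable).
SET GLOBAL innodb_buffer_pool_dump_at_shutdown = ON;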
Isn't "effective_cache_size" just a parameter that indicates to the planner what the OS is actually doing?
http://www.cybertec.at/2013/11/effective_cache_size-better-set-it-right/
And for caching the tables, don't we need to configure "shared_buffers"?
And with regard to MySQL, yes, the "innodb_buffer_pool" size will cache the data for InnoDB tables and prevent disk reads. Make sure it's configured adequately to hold all the data in memory.

How to quickly insert a big collection into MySQL using Waterline

I want to insert 10000+ records into MySQL, but if I process 30000+, the application crashes with:
RangeError: Maximum call stack size exceeded
I use sails 0.10.5 and sails-mysql 0.10.8, with Model.create([....]).exec();
How do I resolve this?