Error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 77 bytes)
File: vendor/twig/twig/lib/Twig/NodeVisitor/SafeAnalysis.php
Line: 47
I've just uploaded a site to a remote server (Ubuntu/Apache), and I'm getting the above error when the backend pages load. The frontend is working fine.
Also, many of the backend pages just show a white screen (config pages, content pages).
Any idea what's happening here?
In your php.ini file, adjust memory_limit, e.g.:
memory_limit = 64M
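41943040 bytes is a 40M limit, so 64M (or more) gives the backend some headroom. After changing the value it is worth confirming the setting actually took effect for the PHP that Apache uses, since the CLI and the web server can read different php.ini files. A quick check, assuming the php CLI is installed:
php --ini                                        # shows which php.ini files the CLI loads
php -r 'echo ini_get("memory_limit"), PHP_EOL;'
For the web-server side, a temporary script that prints ini_get('memory_limit') (or phpinfo()) served through Apache will show what the backend pages actually get. Remember to reload Apache after editing php.ini.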
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
You can increase the memory limit in three ways:
1. Edit your wp-config.php file and add this line if it is not already there:
define('WP_MEMORY_LIMIT', '256M');
2. Edit your php.ini file. If you have access to it, change this line (if it currently shows 64M, try 256M):
memory_limit = 256M ; Maximum amount of memory a script may consume
3. Edit your .htaccess file. If you don't have access to php.ini, try adding this to an .htaccess file:
php_value memory_limit 256M
Hope this will help you.
I am new to Amazon EC2. Recently I found that MySQL is not working properly and keeps crashing.
I think the issue is low memory or disk space.
Here is some of the output:
free -k
total used free shared buffers cached
Mem: 1020536 254744 765792 0 54028 83748
-/+ buffers/cache: 116968 903568
Swap: 82628 0 82628
swapon -s
Filename Type Size Used Priority
/swapfile file 82628 0 -1
I am not able to start MySQL.
Can someone please suggest a solution?
Check the InnoDB buffer pool size (innodb_buffer_pool_size) in your my.cnf file. If it is larger than your system's memory, you have found your problem. Try reducing it and then start MySQL again.
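For reference, the free -k output above shows roughly 1 GB of RAM and only about 80 MB of swap, so a buffer pool sized for a larger machine will get mysqld killed by the out-of-memory killer. A minimal sketch of the relevant my.cnf section, assuming a 1 GB instance (the values are illustrative, not prescriptive):
[mysqld]
# keep the buffer pool well below total RAM on a 1 GB instance
innodb_buffer_pool_size = 256M
# trimming these also reduces the startup footprint
key_buffer_size         = 16M
max_connections         = 50
After editing, try starting MySQL again and check its error log (often /var/log/mysql/error.log) to confirm it comes up cleanly.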
I am trying to run Hadoop balancer command as follows:
hadoop balancer -threshold 1
But I am getting several WARN messages as
Failed to move blk_1073742036_1212 with size=134217728 from 192.168.30.4:50010 to 192.168.30.2:50010 through 192.168.30.4:50010: block move is failed: Not able to receive block 1073742036 from /192.168.10.3:53115 because threads quota is exceeded.
And at the end...
No block has been moved for 5 iterations. Exiting...
Balancing took 4.092883333333333 minutes
I set ulimit values as follows:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 2065455
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 64000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65535
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
But I am still getting the same error.
Could someone please give me suggestions on this? I appreciate your help.
Question was asked a long time ago, posting an answer for posterity's sake.
The Hadoop balancer has a bug where it prematurely exits iterations. This caused the balancer to be very slow. This was fixed in HDFS-6621 and officially released as part of Apache Hadoop 2.6.0. Since this is a bug in the Balancer itself, it is possible to run an updated version of the Balancer without upgrading your cluster.
Datanodes limit the number of threads used for balancing so that balancing does not eat up all the resources of the cluster/datanode. This is what causes the WARN message you're seeing. By default the number of threads is 5, and this was not configurable prior to Apache Hadoop 2.5.0. HDFS-6595 added the property dfs.datanode.balance.max.concurrent.moves to let you control the number of threads used for balancing. Since this is a datanode-side property, using it will require an upgrade of your cluster.
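If your datanodes are on a version that supports it, the property goes in hdfs-site.xml on the datanodes (a sketch; 20 is only an illustrative starting point, and the datanodes must be restarted for it to take effect):
<property>
  <name>dfs.datanode.balance.max.concurrent.moves</name>
  <value>20</value>
</property>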
If you're using a distribution packaged by a vendor (e.g. Hortonworks, Cloudera, etc.), the fixes mentioned above may have been back-ported to an earlier version. Check your vendor's release notes to find out.
This is a duplicate post, since I didn't get any help on askubuntu.com.
I have a 1 TB external hard drive that I recently formatted to NTFS. It was mounting fine on my Ubuntu 11.10 until just now, and I didn't make any changes that would affect my OS or the external drive.
The error that I get is:
Error mounting: mount exited with exit code 13: $MFTMirr does not match $MFT (record 0).
Failed to mount '/dev/sdb2': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows twice. The usage of the /f parameter is very
important! If the device is a SoftRAID/FakeRAID then first activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for more details.
I did read this and this, but neither helped.
I tried installing ntfsfix, but no such package exists anymore.
I have never used this HDD on a Windows machine. If I need to use another machine to fix this, I have access to a Mac.
Any advice?
This is my sudo fdisk -l output:
What in the world is GPT? I didn't do that. It used to be NTFS.
Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000586fb
Device Boot Start End Blocks Id System
/dev/sda1 * 2148 961320312 480659082+ 83 Linux
/dev/sda2 961320313 976773167 7726427+ 5 Extended
/dev/sda5 961320314 976773167 7726427 83 Linux
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xcfd88605
Device Boot Start End Blocks Id System
/dev/sdb1 1 1953525167 976762583+ ee GPT
This is the thing that worked:
1. I first needed to get ntfs-3g (sudo apt-get install ntfs-3g).
2. Run sudo fdisk -l to figure out which device the NTFS partition is. Mine was /dev/sdb1.
3. I ran ntfsfix -b /dev/sdb1 and that fixed the problem.
Error mounting: mount exited with exit code 13: $MFTMirr does not match $MFT (record 0). Failed to mount '/dev/sda1': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper/ directory, (e.g. /dev/mapper/nvidia_eahaabcc1).
Please see the 'dmraid' documentation for more details.
Solution:
sudo fdisk -l
sudo ntfsfix /dev/<disk_name>
To find the disk name:
Go to Dashboard -> Disk Utility -> click the disk -> it shows the device as /dev/***
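If your desktop doesn't have the Disk Utility application, lsblk (from util-linux) gives the same information on the command line. A sketch, assuming the external drive shows up as sdb as in the fdisk output above:
sudo lsblk -f          # list block devices with their filesystems
sudo ntfsfix /dev/sdb1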
I get this message:
Request Entity Too Large
The requested resource
/index.php
does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit.
I set
php_value post_max_size 50M
php_value upload_max_filesize 50M
in .htaccess, but it did not help.
How can I overcome this?
Thanks
Once you have already raised PHP's memory_limit, post_max_size and upload_max_filesize, I would like to recommend some articles related to the topic; maybe one of them will solve the problem.
I found this post on Server Fault:
https://serverfault.com/questions/79741/php-apache-post-limit/79745#79745
sybreon suggests to double-check the Content-Length, and - citing - "ensure that you are directly connecting to Apache and not through either a proxy or a reverse-proxy. Some reverse-proxies place a cap on the maximum size of a request as a sort of security measure. So, you may want to check that as well as your Apache logs to ensure that nothing else is going on."
sybreon also posted this link: Apache 413 error problems.
The following is only applicable if you have the mod_ssl module turned on in Apache. (Otherwise this setting can cause a server crash.)
Citing the article:
"I was using Apache SSL client certificates, which have a limit of 128K, and if re-negotiation has to happen, a larger POST will fail.
This Bugzilla posting had the clues - You have to set the following as DEFAULTS for your SSL server, not just the directory.
SSLVerifyClient require
Otherwise it forces a renegotiation of some sort, and fails with a 413 error."
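In practice that means putting the directive at the virtual-host level rather than inside a <Directory> or <Location> block. A minimal sketch, assuming a standard *:443 SSL virtual host (certificate paths are placeholders):
<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key
    # server-wide default, so no per-directory renegotiation is triggered
    SSLVerifyClient require
    SSLVerifyDepth  1
    # ...
</VirtualHost>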
The previous article also mentioned the LimitRequestBody directive.
One poster there says that an appropriate setting of this directive solved his problem.
I hope one of these settings solves the problem!
The only thing that worked for me was to increase the SSL renegotiation buffer size. You can set this by...
<Directory /my/blah/blah>
...
# Set this to something big...
SSLRenegBufferSize 10486000
...
</Directory>
...and then just restart Apache for the change to take effect. (Found this at: http://forum.joomla.org/viewtopic.php?p=2085574)
You can also use "Location /" to simply apply the setting to a whole VirtualHost:
<VirtualHost *:443>
# ...
<Location />
SSLRenegBufferSize 101048600
</Location>
# ...
</VirtualHost>
My server is Apache. It was the mod_security module that was blocking POSTs of data larger than approximately 171 KB.
I made the following configuration changes in mod_security.conf:
SecRequestBodyNoFilesLimit 10486000
SecRequestBodyInMemoryLimit 10486000
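For context, these are ModSecurity 2.x directives (the file may be called modsecurity.conf on your distribution), and the related SecRequestBodyLimit directive, if set, may also need to be at least as large. A sketch of the relevant section with the ~10 MB values from above:
# ModSecurity request body limits (values are illustrative)
SecRequestBodyLimit           10486000
SecRequestBodyNoFilesLimit    10486000
SecRequestBodyInMemoryLimit   10486000
Restart Apache afterwards so the new limits are picked up.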
If post_max_size and upload_max_filesize in PHP have been set,
and LimitRequestBody in apache2.conf or the mod_security config files is set high enough,
then possibly an .htaccess file will work.
1. Go to the directory containing the upload PHP file (the file or page throwing the error).
2. Make or edit .htaccess.
3. Edit or create a line with LimitRequestBody 20971520 in it.
4. Save the .htaccess and set its permissions (644, owned by the Apache user).
5. Possibly restart Apache.
Ta-da, hopefully fixed.
This sets the limit for this folder only, which is one way to avoid a global setting in PHP and Apache that leaves you open to large-packet / upload DoS attacks.
LimitRequestBody 0 gives you unlimited uploads. A combined .htaccess sketch follows below.
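Putting the PHP values from the earlier question together with the Apache directive, a per-directory .htaccess might look like this (a sketch: 20971520 bytes is 20 MB, the php_value lines only apply when PHP runs as mod_php, and the PHP limits are kept no larger than LimitRequestBody so the two agree):
# .htaccess in the upload directory
LimitRequestBody 20971520
php_value post_max_size 20M
php_value upload_max_filesize 20M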
I was struggling with this 413 Request Entity Too Large problem for the last day or so while trying to upload fairly large (multi-MB) images to the server.
My setup is Apache (227) proxying requests to a JBoss EAP (6.4.20) server for access to REST endpoints.
Two things worked for me:
1. Make SSLVerifyClient require at the virtual host level. This means all resources need a valid client certificate presented in order to be served. That was not acceptable for me, since everything except /api should NOT be protected by mutual auth. So, while it worked, it was not an option for me.
2. I removed the global-level SSLVerifyClient require and kept it 'optional', then re-enabled require only inside <Location /api>...</Location>. The trick was to let SSL renegotiation buffer the request up to a certain threshold, which should be at least our desired upload file size (see the combined sketch after this answer).
So, finally, it turned out that I had to set the 'SSLRenegBufferSize' directive on a specific LocationMatch as follows:
<LocationMatch ^/api/v1/path/(.*)/to/(.*)/resource/endpoint$>
    # allow up to 5 MB for files to come through
    SSLRenegBufferSize 5242880
</LocationMatch>
The (.*) groups in the LocationMatch above represent my path parameters in the endpoint. Hope this helps.
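For completeness, the overall shape of the configuration described above might look like this (host, paths and the 5 MB buffer are placeholders; adjust for your own endpoints):
<VirtualHost *:443>
    SSLEngine on
    # default: ask for, but do not demand, a client certificate
    SSLVerifyClient optional
    # require mutual auth only for the API
    <Location /api>
        SSLVerifyClient require
    </Location>
    # let large POST bodies buffer while the renegotiation happens
    <LocationMatch ^/api/v1/path/(.*)/to/(.*)/resource/endpoint$>
        SSLRenegBufferSize 5242880
    </LocationMatch>
</VirtualHost>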
After raising PHP's memory_limit, post_max_size and upload_max_filesize in php.ini, I still had the problem.
What was also needed was the following in apache2.conf:
LimitRequestBody 1000000000
That's for a max size of 1GB.
The docs say that 0 is the default, which means unlimited. However, until I set the directive, I couldn't upload large files.
Don't forget to restart apache2.
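On a Debian/Ubuntu-style system (an assumption; adjust for your init system) that is something like:
sudo systemctl restart apache2
# or, on older releases:
sudo service apache2 restart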