Problems with quotas on Red Hat OpenShift - MySQL

I am trying to deploy an application on OpenShift and I always get this message:
FailedCreate: checkapi-14 Error creating: pods "checkapi-14-" is forbidden:
[minimum cpu usage per Pod is 19m, but request is 12m., minimum memory
usage per Pod is 100Mi, but request is 67108864., minimum cpu usage per
Container is 19m, but request is 12m., minimum memory usage per Container
is 100Mi, but request is 64Mi.]
Could you tell me why, please?

If you are using OpenShift Online, this suggests you have overridden the memory resource limit values and made them too small. The minimum you can set memory to is 256MB.
The reason I am talking about memory rather than CPU is that in OpenShift Online, CPU is allocated in proportion to memory; you cannot control CPU yourself.
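As a rough sketch of the fix, you can raise the limits from the command line with oc (assuming the deployment configuration is named checkapi, as the error message suggests; adjust the values to your plan's minimums):

oc set resources dc/checkapi --limits=memory=512Mi --requests=memory=512Mi

Once memory is back above the plan minimum, the proportionally calculated CPU request should also clear the 19m floor from the error message.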

Related

JMeter inconsistent CommunicationsException: Communications link failure

com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet successfully received from the server was 976,464 milliseconds ago. The last packet sent successfully to the server was 974,674 milliseconds ago.
This error occurs in JMeter when I run the following test plan, which sends 15 MB files to AWS RDS.
LoadTestPlan
  JDBC Connection Configuration:
    Max Wait (ms): 0 (indefinite wait)
    Max connections: 0 (no limit)
  Thread Group:
    Number of threads: 200
    Ramp-up period (seconds): 100
    Loop count: infinite
    Scheduled to run for 3 hours
  JDBC Request:
    LOAD DATA LOCAL INFILE statement
RDS configuration:
  Engine: MySQL 5.7.33
  max_connections: 200
  innodb_lock_wait_timeout: 6000
  max_allowed_packet: 64 MB
There are many posted solutions for this Communications link failure, but in my case some requests succeed while others fail with this error. So I am starting to think it is a network problem, but I am using a high-speed Ethernet connection at 74 Mbps. Even if it is a network problem, there must be some parameter that, when adjusted, would allow connections to succeed even over a poor network.
JMeter version: 5.4
With regard to your statement:
Max connections: 0 (no limit)
I don't think that's true; per the BasicDataSource Configuration Parameters, I would say it's rather 8 (see the maxTotal parameter).
So it looks like you're running 200 concurrent threads against a pool of 8 connections; try increasing the max connections to match the number of JMeter threads. Or, if you're not testing the database directly, you should instead mimic your application's JDBC configuration.
I know that JMeter tries to set the maximum pool size equal to the initial pool size, as evidenced by this line; however, the source code of BasicDataSource suggests setting a negative number for "no limit".
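In practice, that means setting the pool size explicitly in the JDBC Connection Configuration instead of relying on 0. A sketch (200 assumes one pooled connection per Thread Group thread; -1 relies on the negative-means-no-limit behaviour of BasicDataSource mentioned above; the 10-second Max Wait is an illustrative choice so requests fail fast instead of blocking forever):

JDBC Connection Configuration:
  Max Number of Connections: 200   (match the Thread Group)
  Max Wait (ms): 10000             (fail fast instead of waiting indefinitely)

or, for an unbounded pool:

JDBC Connection Configuration:
  Max Number of Connections: -1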
More information: The Real Secret to Building a Database Test Plan With JMeter

OpenShift 3 free: mini Spring Boot app + MySQL immediately exceeds the quota limits?

While experimenting with OpenShift v3, I could create and deploy a very simple web application with WildFly and Postgres.
When trying to create a very simple Spring Boot application (as a WAR) with MySQL (with one table), the deployment immediately exceeds the compute quota. As a result, this very simple application cannot run properly.
Error creating: pods "springbootmysql-8-" is forbidden: exceeded
quota: compute-resources, requested: limits.cpu=1,limits.memory=512Mi,
used: limits.cpu=2,limits.memory=1Gi, limited:
limits.cpu=2,limits.memory=1Gi 19 times in the last 11 minutes
Update: I have now configured both pods with 480Mi of memory, and the memory quotas are no longer exceeded.
I now get an error message stopping the build and deployment:
Error creating: pods "springbootmysql6-2-" is forbidden: exceeded
quota: compute-resources, requested:
limits.cpu=957m,limits.memory=490Mi, used:
limits.cpu=1914m,limits.memory=980Mi, limited:
limits.cpu=2,limits.memory=1Gi
On OpenShift Online Starter, if you are running both a database and a frontend, each using 512MB, you only have enough resources to use the Recreate deployment strategy. You will need to go into the deployment configuration for the frontend and change the deployment strategy from Rolling to Recreate.
If it still has the same issue after making the change, scale the number of replicas of the frontend down to 0 and then back to 1. This ensures Kubernetes is not stuck in the prior state, still trying to deploy under the old settings. Things should then be okay.
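From the command line, those two steps look roughly like this (a sketch; springbootmysql6 is taken from the error message above, so substitute your actual deployment configuration name):

oc patch dc/springbootmysql6 -p '{"spec": {"strategy": {"type": "Recreate"}}}'
oc scale dc/springbootmysql6 --replicas=0
oc scale dc/springbootmysql6 --replicas=1

With Recreate, the old pod is torn down before the new one starts, so the deployment never needs double the resources the way a Rolling update does.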

Hadoop node manager does not satisfy minimum allocations

Hadoop node manager doesn't satisfy minimum allocations. I am getting the following error:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Received
SHUTDOWN signal from Resourcemanager, Registration of NodeManager
failed, Message from Resource Manager: NodeManager from
DP112.Mas.company.com doesn't satisfy minimum allocations, Sending
SHUTDOWN signal to the NodeManager.
RAM: 6 GB on my physical box.
I am setting up a single-node cluster for initial testing purposes.
The "ResourceManager: NodeManager from *** doesn't satisfy minimum allocations" error is seen when node on which node manager is being started does not have enough resources w.r.t yarn.scheduler.minimum-allocation-vcores and yarn.scheduler.minimum-allocation-mb configurations.
Reduce values of yarn.scheduler.minimum-allocation-vcores and / or yarn.scheduler.minimum-allocation-mb then restart Yarn.
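As a sketch, for a 6 GB single-node box the entries in yarn-site.xml might look like this (512 MB and 1 vcore are illustrative values; pick anything at or below what the node can actually offer):

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>512</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>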

Exception in GlassFish

I am facing a very annoying exception in GlassFish:
SEVERE: Exception in thread "RMI RenewClean-[192.168.1.2:8686]"
SEVERE: PermGen space
java.lang.OutOfMemoryError: PermGen space
My hardware resources are ample, and when I open Task Manager and look at the resources, there is memory available.
This exception forces me to restart my PC every 10 to 15 minutes :( What should I do?
You need to increase the amount of PermGen space using the -XX:MaxPermSize=256m flag.
See this related SO question
In order to set this up in Glassfish, use the following steps:
Connect to the admin interface of your GlassFish server (localhost:4848)
Move to Application Server > JVM Settings > JVM Options, check the global amount of memory allocated to your instance of GlassFish (it should be something like -Xmx512m or more), and add one JVM option with the value:
-XX:MaxPermSize=256m
The amount of memory depends on the amount you need. Increase it if it keeps crashing, but reading the PermGen article may help in determining the right amount.
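If you prefer the command line, the same option can be added with asadmin (a sketch; note that asadmin requires the colon in the flag to be escaped):

asadmin create-jvm-options "-XX\:MaxPermSize=256m"
asadmin stop-domain
asadmin start-domain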
The PermGen space error is one of the most irritating errors in GlassFish.
It appears when you do a lot of deploys or redeploys on the server, because the server reserves memory and never frees it. I recommend monitoring the server with Apache JMeter to watch the amount of memory used (and if it is near the maximum, restart the server before it crashes).
To fix it temporarily, you should add some JVM options in the GlassFish administration console to improve its memory consumption, as amccormack said.
I recommend using:
-XX:PermSize=512m
-XX:MaxPermSize=512m
-XX:+CMSClassUnloadingEnabled
By the way, once the PermGen space error appears, the server will not respond (even to asadmin stop-domain). But you can easily restart it if you kill the Java process that runs GlassFish and then call asadmin start-domain. I think that is quicker than restarting the whole machine.
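For reference, that kill-and-restart cycle looks roughly like this on Linux (a sketch; jps ships with the JDK, and the grep pattern is an assumption about how the GlassFish process is named on your box):

jps -l | grep -i glassfish     # find the PID of the GlassFish JVM
kill -9 <pid>                  # force it down, since stop-domain hangs
asadmin start-domain           # bring the domain back up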

SSIS package execution - out-of-memory issue

While executing an SSIS package, I got the following errors:
The buffer manager failed a memory allocation call for 10484608 bytes, but was unable to swap out any buffers to relieve memory pressure. 20 buffers were considered and 20 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
[DTS.Pipeline] Error: Thread "SourceThread0" has exited with error code 0xC0047038.
[DTS.Pipeline] Error: The Data Flow task failed to create a buffer to call PrimeOutput for output "XML Source 1" (91) on component "GeneralCongfigurations" (98). This error usually occurs due to an out-of-memory condition.
It happens when trying to insert data into a SQL table from an XML file with a Script Component.
How can I solve it?
The message tells you SSIS is using 20 buffers of 10 MB each, about 200 MB in total.
That is not a very big amount of memory; even on a 1 GB machine you should not run out.
It is likely that other processes are consuming the rest of the memory; check Task Manager. Often it is SQL Server that consumes all the memory. If you run SQL Server and SSIS on the same machine, restrict the amount of memory SQL Server is allowed to consume (in the SQL Server properties), leaving some memory for SSIS; I would recommend leaving at least 0.5 GB.
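The same cap can be set in T-SQL (a sketch; the 1536 MB figure assumes a 2 GB machine where you want to leave roughly 0.5 GB for SSIS and the OS):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1536;
RECONFIGURE;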
You should also look at all of the log messages. In particular, Lookup transforms can log a lot of information about how much memory they're using. You can get some very detailed logs about memory allocation.
I faced the same issue. My XML source is only around 2 MB, yet when I started the package it threw this out-of-memory warning. My server has around 8 GB of RAM, so it was not a lack of memory: the local SQL Server service was taking close to 6 GB, which I could see in Task Manager. Since my SSIS package's destination was a different database server, I did not need the local SQL Server service to be running, and the moment I stopped that SQL Server service, my SSIS package executed successfully.
@Chris Pickford
You want to use these PerfMon counters:
Memory\
  Available Bytes/MBytes
  Committed Bytes
SQLServer:SSIS Pipeline\
  Buffer Memory
  Buffers in Use
  Buffers Spooled
Also, if you've got logging enabled on your package, look at the User:BufferSizeTuning event.
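To capture those counters from the command line, typeperf works as a sketch (note that the "SQLServer:SSIS Pipeline" object may carry a version suffix, e.g. "SSIS Pipeline 11.0", depending on your SQL Server release):

typeperf "\Memory\Available MBytes" "\SQLServer:SSIS Pipeline\Buffer Memory" -si 5 -o ssis_counters.csv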