Time to first byte of 6 seconds on OpenShift - openshift

I am using this template (https://github.com/openshift-evangelists/php-quickstart) on an OpenShift Online Starter node (US West 2). I assigned 256MB to the PHP container and 256MB to the MySQL container.
There is no data in MySQL, and even with really bare-bones PHP scripts the time to first byte (TTFB) is 6 seconds. I don't see delays like this with other websites, and definitely didn't on my old OpenShift 2 installation.
Is this normal? Is OpenShift 3 that much slower on the free (Starter) tier? Or is there something I am doing wrong? Is there any way I can troubleshoot this further?

256MB is too little for MySQL; from what I have seen it usually wants to use more than that, which is why the default was set to 512MB. Unless, that is, it dynamically works out how much memory it has available and tries to gobble up as much as possible.
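If you want to experiment with giving MySQL more memory (quota on the Starter tier permitting), the limit can be raised with the oc client. This is only a sketch; the deployment config name mysql is an assumption, so use whatever name the template actually created:

    # See what the template created and the limits currently set
    oc get dc
    oc describe dc mysql | grep -i -A 2 limits

    # Raise the MySQL container's memory request/limit to 512Mi
    # (the dc name "mysql" is an assumption, adjust to your template)
    oc set resources dc/mysql --requests=memory=512Mi --limits=memory=512Mi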
The behaviour with slow responses is a known issue which has been affecting a number of the Online Starter environments on and off. The issue is still being investigated and a solution is being worked on.
You can verify actual response times by getting a shell inside the container, using oc rsh or the web console terminal, and running curl against $HOSTNAME:8080.
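For example (the pod name below is a placeholder; curl's -w timings let you separate app time from router/network time):

    # Get a shell in the PHP pod (substitute your actual pod name)
    oc get pods
    oc rsh <php-pod-name>

    # Inside the container: time a request directly against the app
    curl -s -o /dev/null \
      -w "connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
      http://$HOSTNAME:8080/

    # From your own machine: the same request through the public route
    curl -s -o /dev/null -w "ttfb: %{time_starttransfer}s\n" https://<your-app-route>/

If the TTFB is small inside the container but several seconds through the route, the delay is in the platform's router/network rather than in your app or database.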

Related

Application Slow after a couple of uses

I created an application that works perfectly on my computer, but when I uploaded it to the server to start testing it became very slow, especially after a couple of uses (the first minutes work fine). It even becomes unresponsive: as I move through a TreeTable, a form should be updated from the database, but it stops working after a while.
I'm using an Amazon EC2 Linux server and a MySQL database. I checked whether the database connections were the problem, but I'm using no more than 7 out of the 150 maximum connections.
Is this a common problem?
Any ideas on how to solve this?
Thanks!!!
Note: this is a copy of a Vaadin forum thread: https://vaadin.com/forum#!/thread/4816326. I hope it is not against the forum rules to repost it here.
It sounds like you may have a memory leak somewhere in your application that your computer is able to sustain but your server is not. I would suggest trying some load testing on another machine and seeing which actions cause it to spin out.
You can have a look at this SO answer to see how to do that:
https://stackoverflow.com/a/46227692/460802
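As a rough starting point, something like the following may help (this assumes Apache Bench is installed and that, since this is a Vaadin app, the JVM heap is the thing to watch; the URL, session cookie and PID are placeholders):

    # Hammer one of the suspect views: 1000 requests, 50 concurrent
    ab -n 1000 -c 50 -C "JSESSIONID=<session-id>" http://<ec2-host>/<app-path>/

    # Meanwhile, on the server, watch the JVM heap every 5 seconds;
    # old-gen usage that only ever grows, and full GCs that reclaim
    # almost nothing, are the classic signs of a leak
    jstat -gcutil <java-pid> 5s

    # And keep an eye on overall memory and swap
    free -m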

Nginx vs Apache to solve load issue on website

So we have a web application with 10-12 pages and many POST/GET DB calls. We usually get an Apache crash or other problems when site traffic reaches about 1000 concurrent users, which is a very small number; we have upgraded the server with plenty of RAM and other resources. Our sysadmin did load testing with blitz.io and some custom scripts and is suggesting we move away from Apache. Some things don't make sense to me: Apache shouldn't be too bad at handling a few thousand concurrent users, considering we have Cloudflare for caching. Here is what he suggested:
1. Replace Apache+mod_fcgi with Nginx+php-fpm, which can make the server handle many more users, and then test it.
or
2. For testing: we need 10-20 servers to run a scenario from. Basically, what is needed is a more complex blitz.io analogue: create one server (which takes all those hours), then just clone it in the cloud and pay for about one hour of testing multiplied by the number of servers needed.
Once again, there are many DB calls and .htaccess rules. Also, what makes Nginx better than Apache in this case?
I would check this comparison first. Basically, nginx is event-based, so it's able to handle more requests concurrently. However, as the MySQL DB seems to be the choke point here, it's very possible that nginx wouldn't solve all your problems. Perhaps moving to a NoSQL kind of database that's better at scaling horizontally would help (if that's feasible).
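Before swapping web servers it's worth confirming where requests actually queue up during a load test. A quick check along these lines (mod_status must be enabled for the first command; credentials are placeholders):

    # How many Apache workers are busy vs. idle at peak
    curl -s http://localhost/server-status?auto | egrep 'BusyWorkers|IdleWorkers'

    # What MySQL is doing at the same moment
    mysql -u root -p -e "SHOW FULL PROCESSLIST;"
    mysqladmin -u root -p extended-status | egrep -i 'Threads_connected|Threads_running|Slow_queries'

If most Apache workers sit idle while MySQL shows long-running or locked queries, moving to nginx+php-fpm won't change much on its own.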

How to find out what is causing a slowdown of the application?

This is not the typical question, but I'm out of ideas and don't know where else to go. If there are better places to ask this, just point me there in the comments. Thanks.
Situation
We have this web application that uses Zend Framework, so runs in PHP on an Apache web server. We use MySQL for data storage and memcached for object caching.
The application has a rather unusual usage and load pattern. It is a mobile web application where, every full hour, a cronjob looks through the database for users that have some information waiting or an action to do and sends this information to an (external) notification server, which pushes these notifications to them. After the users get these notifications, they go to the app and use it, mostly for a very short time. An hour later, the same thing happens.
Problem
In the last few weeks usage of the application really started to grow. In the last few days we encountered very high load and doubling of application response times during and after the sending of these notifications (so basically every hour). The server doesn't crash or stop responding to requests, it just gets slower and slower and often takes 20 minutes to recover - until the same thing starts again at the full hour.
We have extensive monitoring in place (New Relic, collectd) but I can't figure out what's wrong; I can't find the bottleneck. That's where you come in:
Can you help me figure out what's wrong and maybe how to fix it?
Additional information
The server is a 16 core Intel Xeon (8 cores with hyperthreading, I think) and 12GB RAM running Ubuntu 10.04 (Linux 3.2.4-20120307 x86_64). Apache is 2.2.x and PHP is Version 5.3.2-1ubuntu4.11.
If any configuration information would help analyze the problem, just comment and I will add it.
Graphs
[Graph links, not reproduced here: info (phpinfo(), APC status, memcache status); collectd (processes, CPU, Apache, load, MySQL, vmem, disk); New Relic (application performance, server overview, processes, network, disks).]
(Sorry the graphs are GIFs and not all from the same time period, but I think the most important info is in there.)
The problem is almost certainly MySQL-based. If you look at the final graph, mysql/mysql_threads, you can see the number of threads hits 200 (which I assume is your setting for max_connections) at 20:00. Once max_connections has been hit, things do tend to take a while to recover.
Using mtop to monitor MySQL just before the hour will really help you figure out what is going on, but if you cannot install it you could just use SHOW PROCESSLIST;. You will need to establish your connection to MySQL before the problem hits. You will probably see lots of processes queued up with only one process currently executing. That one will be the most likely culprit.
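A minimal way to do that from a shell on the server, started a couple of minutes before the full hour (credentials are placeholders):

    # Refresh the process list every 2 seconds so you can see which query
    # all the queued connections end up waiting behind
    watch -n 2 'mysql -u root -pYOURPASSWORD -e "SHOW FULL PROCESSLIST;"'

    # Also confirm the limit you appear to be hitting
    mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW STATUS LIKE 'Threads_connected';"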
Having identified the query causing the problems, you can attack your code. Without understanding how your application actually works, my best guess would be that using an explicit transaction around the problem query (or queries) will probably solve the problem.
Good luck!

Service deployed on Tomcat crashing under heavy load

I'm having trouble with a web service deployed on Tomcat. During peak traffic times the server becomes unresponsive and forces me to restart the entire server in order to get it working again.
First of all, I'm pretty new to all this. I built the server myself using various guides and blogs. Everything has been working great, but due to the larger load of traffic, I'm now getting out of my league a little. So, I need clear instructions on what to do or to be pointed towards exactly what I need to read up on.
I'm currently monitoring the service using JavaMelody, so I can see the spikes occurring, but I don't know how to get more detailed information than this about possible causes or solutions.
The server itself is quad-core with 16GB RAM, so the issue doesn't lie there; more likely I need to properly configure Tomcat to be able to use it (or set up a cluster...?).
JavaMelody shows the service crashing when CPU usage only gets to about 20% and about 300 hits a minute. Are there any max connection limits or memory settings that I should be configuring?
I also only have a single instance of the service deployed. I understand I can simply rename the war file and Tomcat deploys a second instance. Will doing this help?
Each request also opens (and immediately closes) a connection to MySQL to retrieve data; I probably need to be sure it's not getting throttled there too.
Sorry this is so long winded and has multiple questions. I can give more information as needed, I am just not certain what needs to be given at this time!
The server has 16GB of RAM, but how much memory do you have dedicated to Tomcat (-Xms and -Xmx)?
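If you haven't set these explicitly, Tomcat runs with the JVM defaults, which are far smaller than what a 16GB machine can offer. A sketch of a CATALINA_HOME/bin/setenv.sh (the 4g figure is just a starting point, not a recommendation for your workload):

    #!/bin/sh
    # Fixed heap so the JVM doesn't have to grow it under load;
    # tune the size using the memory graphs in JavaMelody
    CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx4g"
    export CATALINA_OPTS

It's also worth checking maxThreads on the HTTP Connector in server.xml (the default is 200), since hanging at roughly 300 requests a minute and 20% CPU looks more like threads blocking on something (often the database) than CPU or memory exhaustion.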

Magento: server requirements for a quite big shop to run smoothly

I'm working on quite a big Magento setup: it will start with 50 different shops (1 Magento install, 1 admin to rule them all), a number expected to rise in the future, and a catalog of more than 1k products. This catalog will be shared by all shops.
I'm concerned about the server requirements I need for this to run smoothly. So far this is what I've found to get the most out of it:
Caching: using Magento's cache with APC, and MySQL query caching
use FastCGI instead of mod_php
database clustering: I don't think it will be necessary for 1k products, what do you think?
using Zend Server
Are there other things I can do in order to improve Magento's performance? I'd like to know everything I need from the beginning so I can find the right server.
Thanks in advance.
Make sure also to use block-level caching for the sites. Beyond this, one of the suggestions that I've seen implemented is to change dynamic blocks (such as blocks that grab product data dynamically) over to statically defined HTML if they don't change often.
Once you've coded a site, tune it using YSlow and Firebug to make sure that as many files as possible are cached, and that the page size is minimized. Minimizing the number of HTTP requests to Apache will increase the capacity of your server.
Finally, enable the flat catalog and flat category functions in Magento. This will force Magento to use fewer joins when retrieving catalog data, so your database load will go down and speed will increase considerably.
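The flat catalog switches live in the Magento 1 admin under System > Configuration > Catalog (Frontend section). If you'd rather script it, here is a sketch that writes the underlying config paths directly and then reindexes (database name and credentials are placeholders, and this assumes a standard Magento 1 core_config_data table):

    # Enable flat catalog and flat category storage (Magento 1 config paths)
    mysql -u MAGENTO_DB_USER -p MAGENTO_DB_NAME <<'SQL'
    INSERT INTO core_config_data (scope, scope_id, path, value)
    VALUES ('default', 0, 'catalog/frontend/flat_catalog_category', '1'),
           ('default', 0, 'catalog/frontend/flat_catalog_product', '1')
    ON DUPLICATE KEY UPDATE value = VALUES(value);
    SQL

    # Rebuild the flat tables and clear the cache (run from the Magento root)
    php shell/indexer.php --reindexall
    rm -rf var/cache/*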
Hope that helps!
Thanks,
Joe
In testing, I noticed amazing improvements using an Amazon instance running Ubuntu with php-fpm and nginx. The only reason I didn't go there with our recent Magento upgrade is that the host I'm on still works OK and I really don't want to be the sysadmin for my site again.
Also, did you know there is http://magento.stackexchange.com ? :D