Cloudflare, Load balancer, and Yii 2

So we’re having an issue with using Cloudflare, a load balancer, and Yii 2. It looks like Yii 2 uses IP-based session persistence, but with Cloudflare + the load balancer it keeps returning different IPs, so users don’t stay logged in. Is there any workaround for this, or has anyone seen this issue and fixed it?

If you are using Cloudflare Load Balancer, you can enable Session Affinity, which provides cookie-based persistence.

It looks like Yii 2 uses an ip-based session persistence
I suggest you look into that once more. I'm 99.9% sure that Yii 2 does not do that; otherwise, using Yii 2 sites on mobile would be a PITA. Phones are constantly changing IPs as they switch WiFi networks and get new data connections.

Related

How to share a session across servers in Node.js?

I want to create a stateless server, so that if any server goes down, the load balancer can redirect the request to the other servers. But if the session is created on one server and that server goes down, how do I persist it? I am using mysqlstore to persist my session in the database, but each server creates a new record in the database, so the session ID is not shared across the different servers. So, I need a mechanism for making the servers stateless.
I'm guessing you're using express-session, since nothing else was indicated.
You're on the right track with mysqlstore. The way to get around server-bound session state here is to ditch express-session and instead encrypt the session data and put it into a client cookie. Then you can decrypt the session data on a GET request and validate it against your database using a separate key stored in the cookie (or create a new session/cookie pair if none exists).
The most popular Node.js middleware for this is cookie-session. Great documentation there as well.
https://github.com/expressjs/cookie-session
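For illustration, a minimal sketch of cookie-session in an Express app; the cookie name, signing keys, and max age below are placeholder values:

```js
// A minimal cookie-session sketch. Keys and cookie options are
// illustrative placeholders, not recommended values.
const express = require('express');
const cookieSession = require('cookie-session');

const app = express();

app.use(cookieSession({
  name: 'session',
  keys: ['signing-key-1', 'signing-key-2'], // used to sign/verify the cookie
  maxAge: 24 * 60 * 60 * 1000,              // 24 hours
}));

app.get('/', (req, res) => {
  // The session data lives in the signed client cookie, so any server
  // behind the load balancer can read it -- no shared store required.
  req.session.views = (req.session.views || 0) + 1;
  res.send(`${req.session.views} views`);
});

app.listen(3000);
```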
As a side note: since it sounds like you're at a pretty scalable place right now, with multiple servers, it's worth ditching express-session anyway. By default, express-session uses MemoryStore, which has a known issue with memory leaks. It's fine for smaller projects, but should probably be reconsidered for larger ones.

Ajax load from a LAN MySQL server using a Chrome app

I am trying to Ajax-load data from a MySQL server on my LAN using a Chrome app.
I am proposing Ajax because I need the Chrome app to load any update to the SQL data instantaneously.
Since this app is only used on the LAN, I presume there is no need to maintain a web server (i.e. running Apache). Can anyone provide some hints? The answer I found on the forum does not help me (an absolute newbie) very much:
https://developer.chrome.com/extensions/xhr
Thank you.
YY
Since this app is only used in LAN network, I presume there is no need to maintain a web server (aka running Apache).
AJAX refers to making an HTTP request to... something.
Something that can answer HTTP requests is called a web server.
So, you do need some sort of web server. It may be a component of the MySQL server, but it's still a web server.
That said, it doesn't look like MySQL has a supported HTTP interface. There is an experimental HTTP plugin that provides a REST API, but it's experimental. Therefore, you would need a separate server application that does what you need.
That said,
I am proposing Ajax because I need chrome app to load up any update in the SQL instantaneously.
AJAX is not a magic bullet. It works well for requesting data, but it is not suited to receiving updates initiated by the server you're talking to. It's a request-response cycle, and while there are some techniques for using it to push data, they are hacks.
WebSockets evolved to cover bidirectional, persistent communication needs. However, this again would require a web server to sit as a proxy between your DB and your app - this time, a WebSockets-capable one.
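As an illustration of that approach, here is a minimal sketch of such a proxy's WebSocket side in Node.js, using the ws package; the port and message shapes are made up:

```js
// A sketch of the server side of a WebSocket proxy, using the 'ws'
// package. Wiring broadcast() to actual DB changes is left out
// (see the polling note further down).
const { WebSocket, WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8081 });

wss.on('connection', (socket) => {
  socket.send(JSON.stringify({ type: 'hello', message: 'connected' }));
});

// Call this whenever the proxy learns that the data changed,
// to push the update to every connected client.
function broadcast(update) {
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify(update));
    }
  }
}
```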
That said, building a Chrome App allows you to connect to a database directly, since Chrome Apps can use the chrome.sockets API. You would need a JavaScript library specifically adapted to the task, but those probably exist.
That said (and noting that I'm not an expert on databases):
Databases are generally not designed to notify you about updates; you need to poll them to see whether the data has changed. You will not get updates instantaneously no matter what interface you use; you'll need to periodically monitor for changes, as in the sketch below.
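A sketch of that polling loop in Node.js, assuming the mysql2 package; the connection details and the updates table are made up:

```js
// Polling MySQL for new rows from Node.js, using the mysql2 package.
// "Instantaneous" really means "within one polling interval".
const mysql = require('mysql2/promise');

async function watchForChanges() {
  const db = await mysql.createConnection({
    host: '192.168.1.10',   // the LAN MySQL box (placeholder)
    user: 'app',
    password: 'secret',
    database: 'mydb',
  });

  let lastSeenId = 0;
  setInterval(async () => {
    // Fetch only rows we have not seen yet.
    const [rows] = await db.query(
      'SELECT id, payload FROM updates WHERE id > ? ORDER BY id',
      [lastSeenId]
    );
    if (rows.length > 0) {
      lastSeenId = rows[rows.length - 1].id;
      console.log('new rows:', rows);
    }
  }, 2000); // poll every 2 seconds
}

watchForChanges().catch(console.error);
```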
Considering this, depending on what you're ultimately trying to do, you may be choosing the wrong instrument.
There are a lot of "buts" here, and it seems like a complex task. You should re-evaluate your readiness, as an "absolute newbie", to undertake it.

AWS Elastic Load Balancing: Seeing extremely long initial connection time

For a couple of days now, we have often seen an extremely long initial connection time (15 seconds to 1.3 minutes) to our ELBs when making any request via SSL.
Oddly, I was only able to observe this in Google Chrome (not in Safari, Firefox, or curl).
It does not occur on every single request, but on around 50% of them, and always on the first request (the OPTIONS call).
Our setup is the following:
A cross-zone ELB that connects to a Node.js backend (currently in 2 AZs in eu-west-1). All instances are healthy, and once a request comes through, it is processed normally. Currently there is basically no load on the system. CloudWatch for the ELB reports no backend connection errors, no surge queue (value 0), and no spillover count. The ELB metrics show low latency (< 100 ms).
We have Route 53 configured to route to the ELB (we don't see any DNS trouble; see the attached screenshot).
We have different REST APIs that all use this setup. The problem occurs on all of the ELBs (each of them connecting to an independent Node.js backend), and all of them are set up the same way via our CloudFormation template.
The ELBs also do our SSL termination.
What could lead to such behavior? Is it possible that the ELBs are not configured properly? And why would it only appear in Google Chrome?
I think it is possibly an ELB misconfiguration. I had the same problem when I put private subnets on the ELB, and fixed it by changing the private subnets to public ones. See https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-manage-subnets.html
Just to follow up on @Nikita Ogurtsov's excellent answer: I had the same problem, except that just one of my subnets happened to be private and the rest public.
Even if you think your subnets are public, I recommend you double-check the route tables to ensure that they all have an internet gateway.
You can use a single route table that has a gateway for all your LB subnets, if that makes sense for your setup:
VPC/Subnets/(select subnet)/Route Table/Edit
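If you prefer to script the check, here is a hedged sketch using the AWS SDK for JavaScript (v2); the region is an assumption, and the output format is made up:

```js
// Flag route tables that have no internet gateway route, and list
// the subnets associated with each. Region is a placeholder.
const AWS = require('aws-sdk');

const ec2 = new AWS.EC2({ region: 'eu-west-1' });

ec2.describeRouteTables({}, (err, data) => {
  if (err) throw err;
  for (const rt of data.RouteTables) {
    const hasIgw = rt.Routes.some(
      (route) => route.GatewayId && route.GatewayId.startsWith('igw-')
    );
    const subnets = rt.Associations
      .filter((assoc) => assoc.SubnetId)
      .map((assoc) => assoc.SubnetId);
    console.log(rt.RouteTableId, hasIgw ? 'has IGW' : 'NO IGW', subnets);
  }
});
```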
For me, the issue was that I had an unused Availability Zone in my Classic Load Balancer. Once I removed the unhealthy, unused Availability Zone, the consistent 20- or 21-second delay in "Initial Connection" dropped to under 50 ms.
Note: you may need to give it time to update. I had my DNS TTL set to 60 seconds, so I saw the fix within a minute of removing the unused Availability Zone.
This can be a problem with Amazon's ELB. The ELB scales the number of instances with the number of requests.
You should see some peaks of requests at those times: Amazon adds instances in order to fit the load.
The instances are not reachable during the launch process, so your clients get those timeouts. It's totally random, so you should:
ping the ELB in order to get all of the IPs it uses (see the sketch below)
run mtr on every IP found
keep an eye on CloudWatch
look for clues
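A possible starting point for the first two steps, sketched in Node.js; the ELB hostname is a placeholder:

```js
// Resolve the ELB's DNS name to get every IP currently in rotation.
const dns = require('dns');

dns.resolve4('my-elb-123456789.eu-west-1.elb.amazonaws.com', (err, addresses) => {
  if (err) throw err;
  console.log('ELB currently resolves to:', addresses);
  // Now ping / run mtr against each of these addresses.
});
```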
Solution: if your DNS is configured to point directly at the ELB, you should reduce the TTL of the IP/DNS association. The IPs behind an ELB can change at any time, so stale records can do serious damage to your traffic.
Clients keep some of the ELB's IPs in their cache, which is how you can run into this kind of trouble.
Scaling Elastic Load Balancers
Once you create an elastic load balancer, you must configure it to accept incoming traffic and route requests to your EC2 instances. These configuration parameters are stored by the controller, and the controller ensures that all of the load balancers are operating with the correct configuration. The controller will also monitor the load balancers and manage the capacity that is used to handle the client requests. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, with the expectation that clients will re-lookup the DNS at least every 60 seconds. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered on each DNS resolution request. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones.
Best Practices ELB on AWS
An ALB load balancer needs 2 Availability Zones. If you use a private/public/NAT VPC setup, then all of the public subnets must have a connection to the Internet.
For me, the issue was that the ALB was pointing to an Nginx instance which had a misconfigured DNS resolver. This meant that Nginx tried to use the resolver, timed out, and only then actually started working.
Not really closely connected with the load balancer itself, but maybe it helps someone figure out the issue in their own setup.
Check the security groups too; that was the issue in my case.
I see a similar problem in my Chrome logs (a 1.3-minute lag). It happens on an OPTIONS request, and in Wireshark I don't even see the request leaving the PC in the first place. Any suggestions as to what Chrome might be doing are welcome.
We recently encountered Chrome taking 1.3 minutes to load pages, but the cause was slightly different. Just popping it here in case it helps someone.
1.3 minutes seems to be how long Chrome will wait when trying to connect to a specific IP. Our domain name has multiple IP addresses in its A record (similar to a CNAME setup), and one of those IPs belonged to a server that had crashed. So sometimes the browser would connect quickly, because it picked a valid IP, and sometimes we would get the long wait as it tried to connect to the invalid IP, timed out, and then retried with a valid IP.
So it is worth checking that all of the IPs listed when you dig your domain are resolving correctly.
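A hedged sketch of that check in Node.js: resolve every A record, then verify each IP accepts a TCP connection; domain, port, and timeout are placeholders:

```js
// Resolve all A records for a domain and probe each IP on port 443.
const dns = require('dns').promises;
const net = require('net');

function probe(ip, port = 443, timeoutMs = 3000) {
  return new Promise((resolve) => {
    const sock = net.connect({ host: ip, port });
    sock.setTimeout(timeoutMs);
    sock.on('connect', () => { sock.destroy(); resolve(`${ip}: OK`); });
    sock.on('timeout', () => { sock.destroy(); resolve(`${ip}: TIMEOUT`); });
    sock.on('error', (e) => { resolve(`${ip}: ${e.code}`); });
  });
}

(async () => {
  const ips = await dns.resolve4('example.com');
  for (const ip of ips) console.log(await probe(ip));
})();
```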

Test performance in OpenShift and prevent getting my IP banned

I have an application hosted on OpenShift. Now I want to figure out how many requests it can handle, in order to check its speed and availability.
So my first attempt will be to generate multiple HTTP GET requests to my REST service (made in Python and hosted on OpenShift).
My fear is that I could get my workplace's IP banned, since this looks like an attack.
On the other hand, I see there are tools like New Relic or Datadog to check metrics, but I don't know whether I can simulate HTTP requests with them and then check the response times.
OpenShift's response
I finally wrote to OpenShift support, and they told me I can simulate HTTP requests without worrying.
I recall the default behavior being that each gear can handle 16 concurrent connections; beyond that, auto-scaling kicks in and you get a new gear. Therefore, I think it makes sense to start by testing that a gear works well with 16 users at once. If not, you can change the scaling policy to whatever works best for your application.
BlazeMeter is a tool that could probably help with creating the connections. They mention 100,000 concurrent users on their main page, so I don't think you have to worry about getting banned for this sort of test.
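For a first smoke test along those lines, a minimal sketch in Node.js that fires a batch of concurrent GETs and times each response; the URL and the concurrency of 16 are placeholders, and a real load-testing tool is better for serious numbers:

```js
// Fire 16 concurrent GET requests and time each response.
const https = require('https');

function timedGet(url) {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    https.get(url, (res) => {
      res.resume(); // drain the body so the socket is released
      res.on('end', () => resolve(Date.now() - start));
    }).on('error', reject);
  });
}

(async () => {
  const url = 'https://myapp.example.com/api/ping'; // placeholder URL
  const times = await Promise.all(
    Array.from({ length: 16 }, () => timedGet(url))
  );
  console.log('response times (ms):', times);
})();
```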

AS3 with MySQL connection via sockets or PHP?

So, we want to move away from AIR (Adobe is dropping support, and the SQLite API implementation is really bad, among other things).
I want to do 3 things:
1. Connect a Flash (not web) application to a local MySQL database.
2. Connect a Flash (not web) application to a remote MySQL database.
3. Connect a Flash (web) application to a remote MySQL database.
All of this can be done without any problem; however:
1 and 2 can be done WITHOUT using a web server, for example with this:
http://code.google.com/p/assql/
3 can also be done using the above, as far as I understand.
Questions are:
If you can connect to the MySQL server with a socket, why use a web server (for example with PHP) as an intermediate connection? Why not connect directly?
I have done this a lot of times, using AMFPHP for example, but wouldn't going directly be faster?
In the case of accessing the local machine, it would make for a simpler deployment: it would only require the Flash application + MySQL server, with no need to also install a web server.
Is this assumption correct?
Thanks a lot in advance.
The necessity of a separate data access layer usually stems from the way people build applications: the layered architecture, the distribution of the workload, and so on. SQL servers usually don't provide a very robust API for user management, session management, etc., so one would use an intermediate layer between the database and the client application so that that layer can handle the issues not related directly to storing the data. Security plays a significant role here too. There are other concerns as well: for example, sometimes you would like to close all access to the database for maintenance, but if you don't have an intermediate layer to notify the user of your intention, you'd leave them wondering whether your application is still alive. The data access layer can also do a lot of caching, actually saving the trips to the database you would otherwise have to make from the client (of course, the client can do that too, but YMMV).
However, in some simple cases, having an intermediate layer is overhead. Moreover, I'd say that if you can do without an intermediate layer, do so: less code makes better programs. But the chances are that you will find yourself needing that layer for one reason or another.
Because connecting remotely over the internet poses huge security problems. You should never deploy an application that connects directly over the internet to a database. That's why AIR and Flex don't ship remote MySQL drivers: such connections should never be used except for building development-type tools. And even if you did build a tool that could connect directly, any decent network admin is going to block access to the database from anywhere outside the DMZ and the internal network.
First, in order for your application to connect to the database, the database port has to be exposed to the world. That means I don't have to hack your application to get your data: I just need to hack your database, and I can cut you out of the problem entirely, because you were careless enough to leave your database port open to me.
Second, most databases don't encrypt credentials or data traveling over the wire. While most databases support SSL connections, most people don't turn it on, because applications want super-fast data access and don't want to pay the SSL encryption overhead, blah blah blah. Furthermore, most applications sit in the DMZ with their database behind a firewall, so it's unlikely that something could eavesdrop on the conversation between the server and the database. However, if you connected directly from an AIR app to the database, it would be very easy to insert myself in the middle and watch the traffic coming out of your database, because you're not using SSL.
There is a whole host of problems around privacy and data integrity with what you are suggesting, which you can't guarantee if you allow an RIA direct access to the database it's using.
Then there are some smaller nagging issues: if you want modern features like publishing reports to a central server (so users don't have to install your software to see them), sending out email, social features, web service integration, cloud storage, collaboration, or real-time messaging, you don't get them if you don't use a web application. Middleware also gives you control over your database, so you can pool connections to handle larger loads. Using a web application brings more to the table than just security.
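The thread itself is about PHP/AMFPHP, but just to illustrate the pattern in the same language as the other sketches above, here is a hypothetical middleware endpoint in Node.js: the client never touches MySQL directly, and the layer validates input and pools connections. All names, credentials, and the schema are made up:

```js
// A hypothetical middleware endpoint illustrating the intermediate-layer
// pattern. The DB port stays closed to the outside world; only this
// API's chosen queries are exposed.
const express = require('express');
const mysql = require('mysql2/promise');

// The pool lives in the middleware, not in the client.
const pool = mysql.createPool({
  host: 'db.internal',
  user: 'app',
  password: 'secret',
  database: 'shop',
  connectionLimit: 10, // pooled connections handle larger loads
});

const app = express();

app.get('/products/:id', async (req, res) => {
  const id = Number(req.params.id);
  if (!Number.isInteger(id)) return res.status(400).send('bad id');
  try {
    // Parameterized query: the layer decides exactly what is exposed.
    const [rows] = await pool.query(
      'SELECT id, name, price FROM products WHERE id = ?',
      [id]
    );
    res.json(rows[0] || null);
  } catch (err) {
    res.status(500).send('database error');
  }
});

app.listen(8080);
```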