Is CoovaChilli suitable for a large number of users?

I recently built a FreeRADIUS + CoovaChilli solution for a small WISP. Now a larger ISP has approached me and wants me to deploy a similar setup.
My question is about the number of clients CoovaChilli can handle simultaneously. The ISP talks of 70,000 users connected at once.

Yes, using VLANs you can handle that many users simultaneously.
Here is documentation: https://www.scribd.com/document/370649998/How-to-Configure-CoovaChilli-to-Support-VLAN
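As a rough sanity check on why segmentation is unavoidable at that scale, here is a plain Python sketch of the address arithmetic (the /24-per-VLAN layout is an assumption for illustration, not something the documentation above prescribes):

# How many usable host addresses does a subnet hold?
def usable_hosts(prefix_len):
    return 2 ** (32 - prefix_len) - 2    # minus network and broadcast addresses

print(usable_hosts(16))        # 65534: even a flat /16 cannot address 70,000 clients
print(usable_hosts(24))        # 254 clients per /24 VLAN
print((70000 + 253) // 254)    # 276 such VLANs to cover 70,000 users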

Related

MySQL Scaling on GCP

I created an instance (8-core) of MySQL on GCP, with a simple database in it. When I run a load of 40,000+ concurrent users (1,500 req/sec), the response times come out very high (10+ seconds), yet hardware CPU utilization sits at only about 15%.
What can I do to get the response time down to milliseconds?
Cheers!
Deepak
Imagine cramming 40,000 shoppers into a grocery store. How many hours would it take for a shopper to buy just one carton of milk?
Seriously, there is a limit to how many connections can be made to any device. Database computers will top out at a few hundred. After that, latency will suffer severely as all the connections are waiting for their turn at various shared resources.
Another approach
Let's say these are achievable:
10ms to connect, fetch info for a page, and disconnect.
1500 pages built per second. (By the way, make sure the web server can achieve this.)
15 concurrent connections, each completing a request in 10 ms, can each serve 100 requests per second; 15 x 100 = 1500 pages per second.
1500 pages per second = 90000 pages per minute.
So, let's specify "40000 pages delivered to different (or the same) users in one minute". I suggest that will be easy, and it won't require much more than 15 concurrent connections. (Traffic is never smooth [except in a benchmark], so 50 concurrent connections may happen.) The sketch below works through the arithmetic.
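A minimal sketch of that arithmetic in plain Python, using only the illustrative figures above (this is Little's Law: connections busy at once = throughput x latency):

# Little's Law: connections busy at once = throughput (req/s) * latency (s).
latency_s = 0.010          # 10 ms to connect, fetch, and disconnect
throughput_rps = 1500      # pages built per second

print(throughput_rps * latency_s)   # 15.0 connections busy at any instant
print(throughput_rps * 60)          # 90000 pages per minute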
[Thousands of database servers is] where I would like to go eventually... however, I first need to solve the basic problem I posted above!
Right, you have to expand the number of database servers now if you are serving 40,000 concurrent queries. Not eventually.
But let's be clear about what comprises concurrent users. Here's an example:
mysql> show global status like 'threads%';
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| Threads_connected | 1266  |
| Threads_running   | 9     |
+-------------------+-------+
I've analyzed high-scale production sites for dozens of internet companies. It's typical to see hundreds or thousands of concurrent connections, but few of these are executing an SQL query at any given moment. When a given thread is between queries, you can still see it in SHOW PROCESSLIST, but it is only in the "Sleep" state.
This is fine, and it's normal.
I give the analogy of an ssh session: you may be connected to a shell on a Linux server, but if you're doing nothing, just sitting at a shell prompt, you aren't taxing the server's resources much. You could have hundreds of users connected over ssh to the same server at once. But if they all begin running applications at the same time, you're in trouble: the server might not handle the load well, and at the very least all of the users will experience slow performance.
It's the same with a MySQL database. If you need a server that can support 40,000 Threads_running, then you need to spread that load over many MySQL servers. There isn't any single server that exists today that can handle that.
But you might mean something different when you say 40,000 concurrent users. It might be that you have 40,000 users looking at some page of your website at the same time, but that does not result in continuous SQL queries in 40,000 database sessions all at once. Each person spends some time reading the web page they just loaded, scrolling up and down, and perhaps typing into a form. While they are doing that, the website is waiting for their next request, and the web server and database server are not doing any work for that user; they can do work for other users.
In this way, a single database server can support 40,000 (or more) users who are by some definition using the site, even though only a handful are invoking any code to run SQL queries at any given moment.
This is normal and most websites can handle that traffic with no problems.
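The way applications typically achieve this shape (many connected users, few running queries) is a connection pool in the application tier. Here is a minimal sketch using Python's mysql-connector pooling; the credentials, schema, and pool size are hypothetical placeholders, and any client library's pool works the same way:

import mysql.connector.pooling

# A small pool shared by the whole application: thousands of connected web
# users funnel through a handful of actual database connections.
pool = mysql.connector.pooling.MySQLConnectionPool(
    pool_name="webapp",
    pool_size=15,                 # hypothetical size; tune against Threads_running
    host="127.0.0.1",             # placeholder credentials
    user="app",
    password="secret",
    database="shop",
)

def fetch_product(product_id):
    conn = pool.get_connection()  # borrow a pooled connection
    try:
        cur = conn.cursor()
        cur.execute("SELECT name, price FROM products WHERE id = %s", (product_id,))
        return cur.fetchone()
    finally:
        conn.close()              # returns the connection to the pool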
If that's the reality of your application, and you still have problems scaling it, then you might have inefficient application code or unoptimized SQL queries. That is, the website could serve the requests easily if you wrote the code to be more efficient.
Inefficient code cannot be fixed by changing your server. The cost of inefficient code scales up faster than you can hope to handle it by upgrading the server. So you must solve performance problems by writing better code.
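As a concrete illustration of inefficient code that no server upgrade can fix, here is the classic N+1 query pattern next to its single-query equivalent (hypothetical schema, standard Python DB-API cursor):

# Inefficient: one round trip per order line (the "N+1 queries" pattern).
def order_total_slow(cur, order_id):
    cur.execute("SELECT item_id FROM order_lines WHERE order_id = %s", (order_id,))
    total = 0
    for (item_id,) in cur.fetchall():
        cur.execute("SELECT price FROM items WHERE id = %s", (item_id,))  # N extra queries
        total += cur.fetchone()[0]
    return total

# Efficient: one round trip; the database joins and sums in a single query.
def order_total_fast(cur, order_id):
    cur.execute(
        "SELECT SUM(i.price) FROM order_lines ol"
        " JOIN items i ON i.id = ol.item_id"
        " WHERE ol.order_id = %s",
        (order_id,),
    )
    return cur.fetchone()[0]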
This is the point of an old tweet of mine.
The subject of scalable internet architecture is very complex. You need to do a lot of study and a lot of testing to grow a website and make it scalable.
You can start by reading. My favorite is Theo Schlossnagle's book Scalable Internet Architectures. Here is a video of Theo speaking about the same subject: https://www.youtube.com/watch?v=2WuT2rdLK5A
The book is from quite a few years ago. Perhaps the scale websites need to support is greater than it was back then, but the methods of achieving scalability are the same today.
Test
Identify bottlenecks
Rearchitect your web app code to relieve those bottlenecks
Test again

Max number of connections using WebSockets

I am developing a web application using WebSockets, which needs real-time data.
The number of clients using the web application will be over 100,000.
Server-side WebSocket coding is done in Java. Can a single WebSocket server handle this number of connections?
If not, how can I achieve this? I have to use WebSockets only.
WebSocket servers, like any other TCP-based servers, can open huge numbers of connections, but each open connection consumes a file descriptor. You can find the maximum (system-wide) number of FDs easily enough on Linux:
% cat /proc/sys/fs/file-max
165038
There are system-wide kernel parameters as well as per-user limits (and shell-level settings like ulimit). By the way, you'll need to edit /etc/sysctl.conf so that your FD changes survive a reboot.
And of course you can increase this number to whatever you want (with the proportional impact on kernel memory).
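If you control the server process, you can also raise the per-process soft limit up to the hard limit at runtime; a minimal sketch with Python's standard resource module (Linux semantics assumed):

import resource

# Each connection consumes one file descriptor, so raise this process's
# soft limit as far as the hard limit allows (raising the hard limit itself,
# and the system-wide fs.file-max, still requires root / sysctl).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print("file descriptors: soft=%d hard=%d" % resource.getrlimit(resource.RLIMIT_NOFILE))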
Or servers can do tricks to multiplex a single connection.
But the real question is: what is the profile of the data that will flow over the connections? Will you have 100K users getting one 64-byte message per day? Or are those 100K users getting fifty 1K messages a second? Can the WebSocket server shard its connections over multiple NICs (i.e., spread the I/O load)? Are the messages all encrypted and therefore in need of a lot of CPU? How easily can you cluster your WebSocket server so failover is easy for you and painless for your users? Is your server mission- or business-critical? That is, can you afford to have 100K users disappear if a disaster occurs? There are many questions to consider when thinking about the scalability of a WebSocket server.
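Those two profiles differ by several orders of magnitude. A quick back-of-the-envelope calculation in plain Python, using the figures from the two hypothetical profiles above:

users = 100_000

# Profile 1: one 64-byte message per user per day.
bytes_per_day = users * 64
print(bytes_per_day / 1e6, "MB/day")        # 6.4 MB/day: trivial

# Profile 2: fifty 1 KB messages per user per second.
bytes_per_second = users * 50 * 1024
print(bytes_per_second * 8 / 1e9, "Gbps")   # ~41 Gbps: needs sharding across NICs/nodes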
In our labs, we can create millions of connections on a server (and many more in a cluster). In the real world, there are other "scale" factors to consider in a production deployment besides file descriptors. Hope this helps.
Full disclosure: I work for Kaazing, a WS vendor.
As FrankG explained above, the number of WebSocket connections depends on the use case.
Here are two benchmarks using MigratoryData WebSocket Server for two very different use cases, which also detail the system configuration (note, however, that system configuration is only a detail; the high scalability comes from the architecture of MigratoryData, which has been designed for real-time websites with millions of users).
In one use case MigratoryData scaled up to 10 million concurrent connections (while delivering ~1 Gbps messaging):
https://mrotaru.wordpress.com/2016/01/20/migratorydata-makes-its-c10m-scalability-record-more-robust-with-zing-jvm-achieve-near-1-gbps-messaging-to-10-million-concurrent-users-with-only-15-milliseconds-consistent-latency/
In another use case, MigratoryData scaled up to 192,000 concurrent connections (while delivering ~9 Gbps):
https://mrotaru.wordpress.com/2013/03/27/migratorydata-demonstrates-record-breaking-8x-higher-websocket-scalability-than-competition/
These numbers are achieved on a single instance of MigratoryData WebSocket Server. MigratoryData can be clustered so you can also scale horizontally to any number of subscribers in an effective way.
Full disclosure: I work for MigratoryData.

SQL Server back end with MS Access 2007 front end: number of users

I have an MS Access 2007 .mdb front end connected to a SQL Server back end. The tables are linked using a system DSN.
I need to stress-test the system, and I would like to know the maximum number of users who can use the system at the same time.
The Access .mdb file is run through WTS (Windows Terminal Services).
Thanks for the help
The maximum number of users is going to depend on how well the application is written and on how good each user's network connection is.
You might have a great application, but if the users are connecting to SQL Server over a slow network, then the application will be slow with 2 users and will still be slow with 250 users.
If the network is good, and the application is written to respect bandwidth, then the application will likely run at the SAME speed with 2, 10, 20, or 100 users.
And depending on how large and powerful the SQL Server box is, you can easily scale out to 500 simultaneous users.
So this question is very difficult to answer, and the network between the Access application and the SQL Server is an important factor.
Some applications perform poorly with 5 users against SQL Server, and such applications will perform even WORSE with 100 or 200 users.
So: how well does the application work with 5 users, and then with, say, 25? If it is written well, you will likely not notice the difference. On the other hand, if it is slow with 1 user, then it is downhill all the way from that point on as you add more users.
So it had better run REALLY well with one user if you are planning to scale out to many users.
It is certainly possible to have 1,000 users at the same time without much effort. As noted, this depends on how well the application was designed with SQL Server in mind, so the quality of the developers' work will be the LARGEST factor in how many users you can scale out to. The capacity of the server and of SQL Server will also determine the maximum number of users.
With a typical application that respects SQL Server, running 50 or 100 users should hardly break SQL Server into a sweat; that should be easily attainable.
In fact, for those 50 users, your HUGE resource hog will be WTS.
Assuming you mean Windows Terminal Services, that setup requires HUGE resources, far more than your SQL Server will, and it will need much more attention and resources than SQL Server. As noted, if the application runs well with 1-2 users, then it will usually run easily with 25. If it runs slowly with only 1 or 2 users, then you are going to have scaling problems as you add more users.
At the end of the day, there are FAR, FAR too many factors to give an answer without case-by-case knowledge of the server involved, the network bandwidth, the capacity of the WTS box, and, MOST important, how well the application was designed (this factor is #1).

Should I use a shared server for social networking app for my college?

I am developing an Android application for my college. There will be no more than 10,000 users in total, and we can assume no more than 500 concurrent users at any given time.
They will all be posting status updates and photos and making comments (no video sharing). I am using only MySQL as the database (without memcached or any other caching layer) and PHP for the web service.
I want to ask: will a shared server serve my purpose? Since it will be a free app, the cost of a dedicated server would be too high.
It's difficult to give a specific answer.
Hosting companies very often offer the possibility to test their hosting for some time (e.g., 14 days). I think you should use this free period and do performance tests. Then you will know whether it's enough for your needs.
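A minimal way to run such a performance test during the trial period, sketched with only the Python standard library (the endpoint URL is a placeholder; 500 is the concurrency figure from the question):

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://your-app.example.com/feed"   # placeholder endpoint
CONCURRENCY = 500                           # the question's peak concurrent users

def timed_get(_):
    start = time.time()
    urllib.request.urlopen(URL, timeout=30).read()
    return time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = sorted(pool.map(timed_get, range(CONCURRENCY)))

print("median: %.0f ms" % (latencies[len(latencies) // 2] * 1000))
print("   p95: %.0f ms" % (latencies[int(len(latencies) * 0.95)] * 1000))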

Design an SMTP Server for High-Volume Outbound Email

We are developing an application which will need to send around 30 outbound emails per second. We have a server running SMTP, but the machine is cloud-hosted and I have no idea what configuration I will need to support such a load. I do not even know whether this load is considered average or high. Do I need to do anything special for it? Do I need a dedicated quad-core server, or is, say, 1/10th of the CPU of a quad-core server good enough?
Hm, what for?
30 emails per second is nothing. I wrote a server like 10 years ago hitting about 5000 per second (to one other server, taking it down in the process; the customer wanted it as fast as possible, and I delivered).
Get any little MTA and just use it. There is no sense in writing something yourself for that low a volume.
Unless you hit the server with a lot of stuff at once (loading it up for transfer), a small VPS should be OK.
Seriously, 30 emails per second is what I sometimes send from my dialup account. This is not even a visible volume for a decent message transfer agent. It is definitely NOT "high volume".
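To see how modest 30 messages per second is, here is a paced-sender sketch using Python's standard smtplib against a local MTA (all addresses are placeholders, and a real deployment would hand delivery off to the MTA's queue):

import smtplib
import time
from email.message import EmailMessage

RATE = 30  # messages per second, per the question

def send_batch(recipients):
    with smtplib.SMTP("localhost") as smtp:       # local MTA does the relaying
        for rcpt in recipients:
            msg = EmailMessage()
            msg["From"] = "noreply@example.com"   # placeholder addresses
            msg["To"] = rcpt
            msg["Subject"] = "Notification"
            msg.set_content("Hello from the app.")
            smtp.send_message(msg)
            time.sleep(1.0 / RATE)                # crude pacing at 30 msg/s

send_batch(["user%d@example.com" % i for i in range(100)])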
Going to echo TomTom on this one and say: just get one of the many services out there that will help you do this. It's probably far easier to use one of their services, and not have to worry about reputation monitoring and all the fun stuff of SMTP servers, than to create your own solution.
Let me know if you need help finding these services.
(Full Disclosure: I work for PostageApp.com, and we're rolling out a hosted SMTP service soon!)