Hosting multiple sites using sub-domains

I have a general question on hosting. I have two separate websites that I created for two small businesses, and each site will be assigned to a sub-domain. Now I'm considering hosting with BlueHost. They state that I can create as many sub-domains as I want. My question is: would there be performance issues, given that each site will have its own set of users? I'm not concerned with SEO performance, as these sites will only be used by businesses. My concern is how processing performance will be affected as multiple users start using these sites.
Thanks in advance
Regards,
Kevin

Since I am a long-time customer of Bluehost, I can say it won't cause a huge performance issue.
But please keep in mind: whatever Bluehost says about "unlimited" plans, they have a file count limit on shared hosting. If you reach that limit, they will warn you and may close your account or restrict it further.

Related

What should I be concerned about when deploying a large-scale application?

I set up a micro-blogging site with simple functionality, and in the future I plan to add an API for a mobile app.
The main features are simple: people can register, post blog entries, tag articles, and comment.
Currently I'm using the Laravel framework + MySQL + Apache, hosted on a VPS.
(Hardware spec: 160 GB HD, 8-core CPU, 8 GB RAM.)
The database tables are basic, including user, comments, article, tags, and a tags pivot table.
Everything works fine.
But I have some concerns about scalability and performance, since I have no experience scaling a web site.
Could someone give me the key concepts of what I should consider if the number of users grows to 10,000–100,000?
I'm OK with changing my hosting platform, or even the framework and database, at this early stage.
What I'm trying to avoid is the site crashing after it has been deployed for a while; the update and migration would be a disaster. Thanks.
Look into scalable cloud hosting, such as DigitalOcean or Amazon, where you can scale your capacity as you grow.
These companies allow you to start small with a "slice" of a server, and as you grow you can expand into multiple servers. Load balancing is usually handled on their end, so all you need to do is focus on your application.

Best way to store multiple type of data

I'm about to design a database for a project I'm working on. I need to store multiple types of data, like videos, photos, text, and audio. I have to store them, and through PHP I will query them frequently. The project is a social network, and I need to connect users through notifications and messages.
Here is the question: is it more helpful to use NoSQL databases (like MongoDB and Redis) for storing data and for the notification system, or can MySQL handle this kind of system as well?
Sorry for my English; technical things are hard to explain for an English beginner like me. Thank you, guys.
The problem with SQL technologies such as MySQL is that you normally have to place binary data in a BLOB, at which point you are already doing it wrong.
Another thing to consider is that file-system access will always be faster than database access, whether it is MongoDB or SQL; however, database storage does have some advantages. Eventually (if your site gets even slightly popular) you will find you need a CDN. These sorts of distribution networks can be costly, but with something like MongoDB you can just spin up replicas of the data in other regions and have the binary data replicate as it is needed (maybe even TTL'd, just like a CDN).
So this is one area to consider: the file system is, most of the time, not the right answer for a high-load site like a social network. However, even Facebook is not immune to having to serve directly from a file system, as they state ( https://blog.facebook.com/blog.php?post=2406207130 ; the post is 5 years old, but I doubt much has changed on this front):
We have also developed our own specialized web servers that are tuned to serve files with as few disk reads as possible. Even with thousands of hard drive spindles, I/O (input/output) is still a concern because our traffic is so high. Our squid caches help reduce the load, but squid isn't nearly fast or efficient enough for our purposes, so we're writing our own web accelerator too.
However, they have an extremely large infrastructure, and most likely your real decision is between database storage and a CDN.
I would personally say you should do some research into content distribution networks and how other sites serve their images. You can find information all over Google. You can search specifically for Facebook, who until recently were using Akamai for their CDN.
You can go either way, but storing binary data in a DB is usually not the most efficient path. You are better off storing the files on the filesystem and putting their paths in the DB.
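As a minimal sketch of that "files on disk, paths in the DB" pattern, assuming a made-up schema and directory layout (sqlite3 stands in for MySQL here purely to keep the example self-contained):

```python
import hashlib
import os
import sqlite3
import tempfile

# Illustrative media root; a real deployment would use a fixed configured path.
MEDIA_ROOT = tempfile.mkdtemp()

def save_upload(db, filename, data):
    """Write the binary payload to disk and record only its path in the DB."""
    digest = hashlib.sha256(data).hexdigest()
    # Fan out into subdirectories so no single directory grows unbounded.
    rel_path = os.path.join(digest[:2], digest[2:4], filename)
    abs_path = os.path.join(MEDIA_ROOT, rel_path)
    os.makedirs(os.path.dirname(abs_path), exist_ok=True)
    with open(abs_path, "wb") as f:
        f.write(data)
    # The database row holds metadata and the path, never the bytes themselves.
    db.execute("INSERT INTO media (filename, path) VALUES (?, ?)",
               (filename, rel_path))
    return rel_path

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE media (filename TEXT, path TEXT)")
path = save_upload(db, "avatar.png", b"\x89PNG fake bytes")
row = db.execute("SELECT path FROM media").fetchone()
```

Queries stay small and fast because they only ever move short path strings, while the web server (or later a CDN) serves the large files directly from disk.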

MySQL Server Runs out of Disk Space?

Our company's web application stores a ton of data points on thousands of visitors a day, and we anticipate the hard disks will fill up soon. Our server cannot support more hard drives, and we are not interested in little tricks that free up a few hours' worth of space.
How can we solve this issue? The database is huge, over 200GB, and our website needs to be available, so I don't believe copying it and moving it to a new, larger server is a good option for us. Furthermore, what happens when THAT server runs out of disk space?
What do large scale web sites normally do to remedy this issue?
Thanks!
You may want to investigate separating the data into multiple database servers as "shards". You will likely have to add some logic to your application to know where to find a given set of data and how to join queries with data that originates from multiple shards. There are third-party applications that can assist you with this process.
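A sketch of the routing logic mentioned above; the shard addresses and the user-ID-based key are illustrative assumptions, not a prescription:

```python
# Hypothetical shard list: in practice each entry would be a connection
# to a separate MySQL server holding a slice of the data.
SHARDS = [
    "mysql://shard0.internal/app",
    "mysql://shard1.internal/app",
    "mysql://shard2.internal/app",
    "mysql://shard3.internal/app",
]

def shard_for(user_id: int) -> str:
    """Route all of a user's rows to one shard so per-user queries stay local."""
    return SHARDS[user_id % len(SHARDS)]

# Every lookup for user 123 lands on the same server.
target = shard_for(123)
```

Note that simple modulo routing forces a mass re-balance whenever you add a shard; consistent hashing or a directory table that maps key ranges to shards avoids most of that, which is part of the "logic to know where to find a set of data" mentioned above.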

Magento: server requirements for a quite big shop to run smoothly

I'm working on a quite big Magento installation: it will have 50 different shops (one Magento install, one admin to rule them all) to start, and this number is expected to grow. The catalog of more than 1k products will be shared by all shops.
I'm concerned about the server requirements I need for this to run smoothly. So far this is what I've found to get the most of it:
Caching: using Magento's cache with APC, and MySQL's query cache
use FastCGI instead of mod_php
database clustering: I don't think it will be necessary for 1k products; what do you think?
using Zend Server
Are there other things I can do to improve Magento's performance? I'd like to know everything I need from the beginning so I can find the right server.
Thanks in advance.
Make sure also to use block-level caching for the sites. Beyond this, one of the suggestions that I've seen implemented is to change dynamic blocks (such as blocks that grab product data dynamically) over to statically defined HTML if they don't change often.
Once you've coded a site, tune it using YSlow and Firebug to make sure that as many files as possible are cached, and that the page size is minimized. Minimizing the number of HTTP requests to Apache will increase the capacity of your server.
Finally, enable the flat catalog and flat category functions in Magento. This will force Magento to use fewer joins when retrieving catalog data, so your database load will go down and speed will increase considerably.
Hope that helps!
Thanks,
Joe
In testing, I noticed amazing improvements using an Amazon instance running Ubuntu with PHP-FPM and nginx. The only reason I didn't go there with our recent Magento upgrade is that the host I'm on still works OK, and I really don't want to be sysadmin for my site again.
Also, did you know there is http://magento.stackexchange.com ? :D

Hosting: why does the number of MySQL databases matter?

Ok, maybe I'm missing something here but I'm looking at various PHP hosting options and I see things like "10 MySQL databases", or 25 or even unlimited.
Now I've worked on sites with an Oracle backend that have 10,000+ concurrent users and we've had... one database.
The idea of a database is, of course, that you can store whatever you want in it. So why does the number matter for MySQL? Is there some table, row, or overall database limit I'm not aware of (entirely possible)? Or is it a question of concurrent connections? Or some other performance issue (e.g. sharding)? The sharding aspect seems unlikely, because even basic hosting options (i.e. under $5/month) come with 10 databases.
If someone could clue me in on this one, it'd be great.
It's mostly a marketing tactic, although there are some technical and historical considerations.
First, apologies if this is obvious, but SCHEMAs are to Oracle as DATABASES are to MySQL (in oversimplified terms, logical collections of tables).
The host is saying you can have XX configured logical databases on a server. Lots of web applications need a database to run. Modern web applications like Wordpress, Movable Type, Joomla, etc. will let you name your tables with a custom prefix, but if an application doesn't have this configuration feature, you need one database per install. Also, in a similar vein, if two applications use the same table name, they can't coexist in a single database. Lots of early web applications started out like this, so early on, the number of databases was an important feature to consider.
There's also access and security. While MySQL (and other databases) can be configured to give users fine grained access-control down to the table and column level, it's often easier to create one user who has full permission on a logical Database. This is important to people who sell services but pass off the actual hosting of completed sites/applications to the shared web-host.
Some people like one database per app.
It's marketing, not technical. They want something to advertise. "10" sounds like a good number.
For development purposes, sometimes it's good to make a copy of your entire database to test new software against. Beats renaming all the tables in your code (although apps like Wordpress let you specify a prefix for all your table names in case you don't have the luxury of multiple DBs).
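The copy-the-whole-database trick above is usually done with mysqldump on a shared host; as a self-contained sketch of the same idea, here it is with sqlite3's backup API (the table and data are made up for illustration):

```python
import sqlite3

# "Production" database with some live data.
prod = sqlite3.connect(":memory:")
prod.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
prod.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")
prod.commit()

# Full copy to test new software against (the mysqldump equivalent
# would be: mysqldump prod_db | mysql test_db).
test_copy = sqlite3.connect(":memory:")
prod.backup(test_copy)

# Destructive experiments on the copy leave production untouched.
test_copy.execute("DELETE FROM users")
assert prod.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 2
```

This is exactly why a plan with extra databases is handy: the copy lives in its own database, with no table renaming or prefix juggling required.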
When I used shared hosting, I set up a separate database for each site/client for custom apps, and if you use Fantastico to install applications it will use a database for each one by default.
I believe the limits are there to prompt you to upgrade to the next tier of service when you outgrow the current level.
Nick is partially correct, but it also has to do with people who try to host multiple sites on one shared account, using a different database for each and a script to serve the correct content with a little DNS masquerading.
Additionally, it's possibly a marketing decision.
If you're only setting up databases for yourself, the low count is fine. But for commercial users, who may want to host multiple sites for multiple clients on the one service while trying to cut corners, you're likely to need one database (or more) per client/project.
So putting a limit on the number of databases somewhat controls the variety of services you can offer, and limits the plan's "resale" potential, i.e. it stops you buying one plan and then selling it on to somebody else, like subleasing.
This is mainly for when you are hosting multiple sites on the same box. For me, I buy/sell a lot of websites, so I need to keep each website as detached from the others as possible.