We currently have an application that runs on one dedicated server. I'd like to move it to OpenShift. It has:
A public-facing web app written in PHP
A Java app for administrators running on Wildfly
A MySQL database
A filesystem containing lots of images and documents that must be accessible to both the Java and PHP apps. A third party FTPs a data file to the server every day, and a Perl script loads it into the DB and the filesystem.
A Perl script occasionally runs ffmpeg to generate videos, reading images from and writing videos to the filesystem.
Is OpenShift a good solution for this, or would it be better to use AWS directly instead (for instance, because it has dedicated file-system components)?
Thanks
Michael Davis
Ottawa
The shared file system will definitely be the biggest issue here. You could get around it fairly easily, though, by setting up your applications to use Amazon S3 or another shared cloud storage service.
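As a minimal sketch of what that could look like from the PHP side, using the AWS SDK for PHP (v3); the bucket and key names here are hypothetical, and credentials are assumed to come from the SDK's default provider chain:

```php
<?php
// Sketch only: share files between apps via S3 instead of a local filesystem.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// The PHP app uploads an image that the Java app (or the Perl script) can fetch later.
$s3->putObject([
    'Bucket'     => 'my-shared-assets',   // hypothetical bucket name
    'Key'        => 'images/sample.jpg',
    'SourceFile' => '/tmp/sample.jpg',
]);

// Any other application reads it back the same way.
$result = $s3->getObject([
    'Bucket' => 'my-shared-assets',
    'Key'    => 'images/sample.jpg',
]);
file_put_contents('/tmp/sample-copy.jpg', (string) $result['Body']);
```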
As for the rest of the application, if I were setting this up I would:
Set up a scaled PHP application. Even if you set the scaling to just one gear, this allows you to put the MySQL database on its own gear, and even to choose a different size for it, such as medium web gears (running PHP) and a large gear running the MySQL database. It also allows your WildFly gear to access the database, since the database gear gets an FQDN (fully qualified domain name) that any application on your account can reach. However, keep in mind that it will listen on a non-standard port instead of 3306.
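As a rough sketch of what that looks like from PHP: the MySQL cartridge exposes its location through OPENSHIFT_MYSQL_DB_* environment variables, and the database name below assumes OpenShift's default of naming the database after the application:

```php
<?php
// Sketch: connect to the MySQL gear using the cartridge's environment variables.
$host = getenv('OPENSHIFT_MYSQL_DB_HOST');       // FQDN of the database gear
$port = (int) getenv('OPENSHIFT_MYSQL_DB_PORT'); // non-standard port, not 3306
$user = getenv('OPENSHIFT_MYSQL_DB_USERNAME');
$pass = getenv('OPENSHIFT_MYSQL_DB_PASSWORD');
$name = getenv('OPENSHIFT_APP_NAME');            // default database name

$db = new mysqli($host, $user, $pass, $name, $port);
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
```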
Then you can set up your WildFly server at whatever size you want, but keep in mind that the MySQL connection variables will not be there; you will have to put them into your Java application manually.
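One way to do that (a sketch only; the hostname, port, and credentials are placeholders you would copy from the MySQL gear) is a datasource entry in WildFly's standalone.xml. This also assumes the MySQL JDBC driver has already been registered under the name "mysql" in the datasources subsystem:

```xml
<!-- Inside the <datasources> section of standalone.xml; all values are placeholders. -->
<datasource jndi-name="java:jboss/datasources/AppDS" pool-name="AppDS" enabled="true">
    <connection-url>jdbc:mysql://mysql-yourapp.rhcloud.com:52907/yourapp</connection-url>
    <driver>mysql</driver>
    <security>
        <user-name>adminUser</user-name>
        <password>secret</password>
    </security>
</datasource>
```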
As for the Perl script, depending on how intensive it is, you could run it on its own gear of whatever size with some extra storage, or you could co-locate it with either the PHP or Java application as a cron job. You can have it store the files on Amazon S3 and pull them down/upload them as it performs the ffmpeg operations. Since OpenShift is also hosted on Amazon (in the US-East region), these operations should be pretty fast, as long as you also put your S3 bucket in the US-East region.
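If you go the cron route, OpenShift's cron cartridge runs any executable script committed under .openshift/cron/ in your repository. A daily job might look roughly like this (the script name and the path to your loader are hypothetical; OPENSHIFT_DATA_DIR and OPENSHIFT_REPO_DIR are standard gear environment variables):

```bash
#!/bin/bash
# .openshift/cron/daily/process-feed.sh -- hypothetical name; runs once a day.
cd "$OPENSHIFT_DATA_DIR"
perl "$OPENSHIFT_REPO_DIR/scripts/load_feed.pl"   # your existing loader script
```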
Those are my thoughts; hope it helps. Feel free to ask questions if you have them. You can also visit http://help.openshift.com and, under "Contact Us", click "Submit a request"; make sure you reference this StackOverflow question so I know what you are talking about, and we can discuss solutions for anything you run into.
I am planning to open a shared web-hosting company. Before opening, I am configuring everything and checking whether it all runs properly.
I have tried Webmin, Virtualmin, and Ajenti as hosting managers on Ubuntu Server, but I am not satisfied with them. Is there an alternative that has secure administrator and client-side control panels and makes it easier to manage client accounts and hosts?
I am using Apache 2 as the web server and MySQL as the database server.
Thank You
Try ZPanel; it is cross-platform and has a great-looking control panel. They also provide an installer which sets up Apache, PHP, MySQL, and ZPanel, all pre-configured to run a shared hosting service.
Getting shared hosting right isn't an easy thing to do, especially if you want to allow your clients to use scripting languages like PHP. By default, PHP runs under the same userid regardless of which customer the files belong to, so customers will be able to see each other's files (including config files with database passwords).
There are ways around this problem, but most of them are either inconvenient for the customer or bring other problems (like having to run Apache as root).
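As one illustration of such a workaround (offered here as a sketch, not a complete recipe): running PHP through PHP-FPM with one pool per customer lets each customer's scripts run under their own userid. A minimal pool definition might look like this:

```ini
; /etc/php-fpm.d/customer1.conf -- one pool (and one userid) per customer
[customer1]
user = customer1
group = customer1
listen = /var/run/php-fpm/customer1.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 5
```

The web server then forwards each customer's PHP requests to that customer's socket, so scripts never execute under a shared userid.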
Besides, the shared-hosting market is already quite crowded with existing companies that have huge data centers and can therefore offer much more service at lower cost.
So my suggestion would be: look at new services that you could provide. Docker hosting or LXC hosting isn't that common yet, so you can compete better there.
If you really want to do simple shared hosting and are in search of an admin tool, try ISPConfig 3.
I've made a native mobile app with a feed system like Instagram/Twitter. In development mode I was just running a local PHP/MySQL Apache server, but now I need to publish the app and work with a real server. What kind of server do I need? I just need to send HTTP requests (JSON), loads of them!
Do I just need a hosting server like 1and1? (http://www.1and1.com/linux-web-hosting?__lf=Static)
But that one only has 1 GB MySQL databases... not enough.
Is there some kind of app server or whatever for this? What kind of server does Instagram use?
These days lots of users are moving to the cloud.
Check out Amazon EC2: http://aws.amazon.com/ec2/
You can set up a micro instance, which is very cheap for running tests and getting off the ground. Then, if you like how it's running, you can simply upgrade to a more powerful server without having to re-install everything.
It also allows you to scale if your application gets really popular by just cloning the server.
Really worth checking out.
To those of you that are trying to be good little developers and version control their ExpressionEngine sites with git, how do you handle your database?
In my limited experience with multiple developers working on one ExpressionEngine site, we've all had to run off of a single MySQL development database on a remote web server. For those of you who have tried this: it is PAINFULLY slow. Page loads can easily take 5-10 seconds, making development extremely difficult; it would actually be quicker to work directly on a remote development server. But I am trying to steer away from working off of a remote MySQL server, in order to be able to work from anywhere and not depend on Internet connection speed/quality.
Just wondering how others handle their MySQL databases.
Do all of your developers run off of one central database? Have you dealt with slowness issues like we have?
Do you keep your database under version control? How do you handle export/imports among multiple developers and multiple branches?
With one developer I can import/export/commit the database very easily, but as soon as you add another developer to the mix, it gets very VERY muddy. Looking forward to hearing everyone's thoughts on this mammoth topic.
Thanks!
It seems a lot of time is lost on failing DNS requests when using a remote database.
Start your MySQL server with the --skip-name-resolve option. (More information on this topic can be found here: http://dev.mysql.com/doc/refman/5.0/en/host-cache.html)
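Equivalently, you can make the option permanent in the server's configuration file:

```ini
# /etc/my.cnf -- skip reverse-DNS lookups on incoming connections
[mysqld]
skip-name-resolve
```

Note that with this option enabled, account entries in the grant tables must use IP addresses (or localhost) rather than hostnames.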
Having a remote database still seems to be the best way for us to work on a project with multiple developers.
I almost always use a central database for development. Depending which host you use, the speed difference may not be huge.
Obviously, if you're not making changes to the database, i.e. only doing template development, keeping the database in sync is less of an issue, so you could potentially bring up a local copy of the database. You just have to remember to repeat any database changes on the shared copy if you do end up making some.
As far as version control goes, I keep a copy of my base EE install's SQL file in my base repository. Other than that I don't usually keep copies of the database in Git, so I don't do a lot of importing/exporting.
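If it helps, this is roughly the export/import flow (the database, user, and file names are placeholders):

```bash
# Export a snapshot of the base install for the repository
mysqldump -u ee_user -p ee_database > base_install.sql

# Another developer recreates it locally
mysql -u ee_user -p ee_database < base_install.sql
```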
Have you looked at the EE Profiler recently? You'll probably notice in the neighborhood of 20-80 queries on your home page, depending on its complexity.
The problem is that, for each query, MySQL must execute a remote request for the data, download the response, and then present ExpressionEngine with its data. Those 20-80 round trips to the database are what's causing your delay: at, say, 100 ms of network latency per round trip, 50 queries add 5 seconds to a page load, and I don't think there is much you can do about it. When using a remote (outside our network) database, I get the same delay as you.
When MySQL is running on your machine or on the production server, it doesn't have the added network latency on its requests for data. That is the difference.
As for fixes, all you can do is move to a database hosted on your internal network. We have a Linux machine that mimics our production environment, which we use for staging. Since it's on our network, we can use its local IP address in our database.php file. This is much faster.
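For ExpressionEngine 2 that is just a matter of pointing database.php at the staging box (the IP address and credentials below are placeholders):

```php
<?php
// system/expressionengine/config/database.php (EE2-era path; adjust for your version)
$db['expressionengine']['hostname'] = '192.168.1.50'; // staging server on the LAN
$db['expressionengine']['username'] = 'ee_user';
$db['expressionengine']['password'] = 'secret';
$db['expressionengine']['database'] = 'ee_site';
```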
The problem that we still have is the issue of channels/fields/entries. When a developer is working on a new section, they'll likely need to create a new channel and fields and/or new entries. When we're ready to push that functionality to production, we have to make those changes on the production server manually, as there is no way to reliably export them. I am hopeful about this add-on, though; we'll see.
In my company (4 developers) we each run our own DB locally. But recently I tested Rackspace Cloud Databases (there are other cloud DB providers too) for a heavy DB that was becoming difficult to run on a small laptop. It's less expensive than running our own DB server, and it can be set up or deleted within a minute.
I want to set up a separate Amazon EC2 instance to store all the images uploaded to my website by users, and I want to serve images from this dedicated server. I know how to set up DNS names pointing to this server, but I would like to know how to set up the directories. For example, if I refer to an image URL as http://images.mydomain.com/images/sample.jpg, then
images.mydomain.com is the server name and
images should be the folder name
Now the question is: should a web server be running on this machine to serve the images, or can I just make the images folder public so that it is visible to the entire world? And how do I avoid directory listing?
Pointer to any documentation would be greatly appreciated.
It certainly is possible to set up a separate EC2 instance to serve your images. You may have good reasons to do that: for example, you may want to authorize only specific users or groups of users to access certain images, in a way that's closely controlled by program logic.
On the other hand, if you're just looking to segment access to image/media files away from the server that provides HTML/web content, you will get much better performance and scalability by moving those files to a service specifically tuned for storage and web access. Amazon's S3 (Simple Storage Service) is one relatively straightforward option. Amazon's CloudFront content delivery network (CDN), or a competing CDN, would be an even higher-performance option.
Using a CDN for file access does add the complexity of configuring the CDN, but if you're going to the trouble of segmenting media access from your primary web server, and if you're expecting any significant I/O load, I've found it to be a high return for the effort expended.
I would definitely not implement this as you are planning. You should store all your images in an Amazon S3 bucket and serve them via Amazon's CloudFront CDN. Why go through the hassle of setting up and maintaining an EC2 instance to do what Amazon has already done? S3 provides virtually unlimited storage and manages permissions, metadata, and so on. CloudFront provides fast access to your images, caching them at edge locations around the world. Additionally, you can use Amazon Route 53 (or some other DNS service) to point various CNAMEs to your CloudFront distribution.
If you're interested in this approach I'd be happy to provide more info on how to set this up.
Yes, you will definitely need to run a web server on the machine; otherwise it will not be possible for clients to connect via HTTP on port 80 and view the images in a browser. This has nothing to do with directory listing being enabled. Once you have a web server running, you can disable directory listing in its configuration.
Install Apache on your server and run it (http://httpd.apache.org/docs/2.0/install.html). Then set up what's called a 'site' in its configuration, pointing to a local directory which will be the base directory for your server. It could, for example, be /home/apache on a Unix system. There you create your images folder. If Apache is set up correctly, you can then access your images via http://images.mydomain.com/images/sample.jpg.
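A sketch of what that site definition could look like (the file name and paths are examples; the -Indexes option is what turns off directory listing):

```apache
# e.g. /etc/apache2/sites-available/images.conf -- hypothetical file name
<VirtualHost *:80>
    ServerName images.mydomain.com
    DocumentRoot /home/apache

    <Directory /home/apache/images>
        Options -Indexes        # do not generate directory listings
        Require all granted     # Apache 2.4 syntax; on 2.2 use "Allow from all"
    </Directory>
</VirtualHost>
```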
I'm running Windows 7 Pro and have a few servers running. One of them is an SSH/file server set up via Cygwin. I already have logging set up internally using syslogd; however, it does not provide adequate logging. When a user is connected to the server, I can see him/her in the Windows 7 Resource Monitor, which shows the IP address as well as how much data is being sent/received. When a user is downloading a file from the file server, I can also see which file he/she is downloading by looking at the disk usage in the Resource Monitor.
Herein lies the first question: how can I log users' IP addresses, the times they connect and disconnect, which files they download, and what their download speed was, to a MySQL database?
In addition to the aforementioned server, I also use IIS to host a website, and would like to have some sort of networking logging.
If I could find a tool that would work for both of these servers that would be the best solution.
I did some searching and found a program called Snort that looks like it would work for the network side of things, but not for the disk usage. I'm not familiar with this program at all, but at first glance maybe it could accomplish part of what I want to do? Maybe there is an easier/better way?
I'm pretty new to MySQL and know very little about network and disk logging so any and all help and guidance would be much appreciated. Thanks!
Advanced Web Statistics (AWStats) does a pretty good job of making sense of IIS log files, and though it will give you more information than you need, it will certainly give you the information you want. It is open source, and my hosting provider uses it for the ASP.NET sites I have developed.
As far as logging the information to MySQL:
I am assuming that you already have, or know how to get the information and you simply want to log it to a MySQL DB.
1st, you will need to create the database (a possible schema is sketched below, after these steps).
2nd, you need the MySQL connector for your programming language of choice. The MySQL ADO.NET connector is excellent and easy to use. I am also assuming you know at least one programming language and how to connect it to a database. If not, I recommend C# with ADO.NET; it is super easy, and there are plenty of tutorials online.
3rd, write a program to send your information to the database when you receive it (see the sketches below).
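For the first step, a hypothetical schema covering the fields mentioned in the question (IP address, connect/disconnect times, file downloaded, download speed) might look like:

```sql
-- Hypothetical schema; adjust names and types to taste.
CREATE DATABASE IF NOT EXISTS server_logs;

CREATE TABLE server_logs.connection_log (
    id                  INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    ip_address          VARCHAR(45) NOT NULL,   -- 45 chars also covers IPv6
    connected_at        DATETIME NOT NULL,
    disconnected_at     DATETIME NULL,
    file_downloaded     VARCHAR(255) NULL,
    download_speed_kbps INT UNSIGNED NULL
);
```

And for the third step, the answer above suggests C# with ADO.NET; here is the same insert flow sketched in PHP with PDO instead (credentials and values are placeholders, and the table matches the hypothetical schema above):

```php
<?php
// Sketch: write one log row to the hypothetical connection_log table.
$pdo = new PDO('mysql:host=localhost;dbname=server_logs', 'log_user', 'secret');
$stmt = $pdo->prepare(
    'INSERT INTO connection_log
        (ip_address, connected_at, disconnected_at, file_downloaded, download_speed_kbps)
     VALUES (?, ?, ?, ?, ?)'
);
$stmt->execute(['203.0.113.7', '2014-05-01 10:00:00',
                '2014-05-01 10:05:42', 'reports/q1.pdf', 512]);
```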