I want to set up a separate Amazon EC2 instance where I store all the images uploaded by users via my website. I want to be able to serve images from this dedicated server. I know how to set up a DNS name pointing to this server, but I would like to know how to set up the directories. For example, if I refer to an image URL as http://images.mydomain.com/images/sample.jpg, then
images.mydomain.com is the server name and
images should be the folder name
Now the question is: should a web server be running on this instance to serve the images, or can I just make the images folder public so that it is visible to the entire world? And how do I avoid directory listing?
A pointer to any documentation would be greatly appreciated.
It certainly is possible to set up a separate EC2 instance to serve your images. You may have good reasons to do that; for example, you may want to authorize only specific users or groups of users to access certain images, in a way that's closely controlled by program logic.
On the other hand, if you're just looking to segment access to image/media files away from the server that provides the HTML/web content, you will get much better performance and scalability by moving those files to a service that is specifically tuned for storage and web access. Amazon's S3 (Simple Storage Service) is one relatively straightforward option. Amazon's CloudFront content distribution network (CDN), or a competing CDN, would be an even higher-performance option.
Using a CDN for file access does add the complexity of configuring the CDN, but if you're going to the trouble of segmenting media access from your primary web server, and if you're expecting any significant I/O load, I've found it to be a high-return-for-effort-expended approach.
I would definitely not implement this as you are planning. You should store all your images in an Amazon S3 bucket and serve them via Amazon's CloudFront CDN. Why go through the hassle of setting up and maintaining an EC2 instance to do what Amazon has already done? S3 provides effectively unlimited storage and manages permissions, metadata, and so on. CloudFront provides fast access to your images, caching them at edge locations all around the world. Additionally, you can use Amazon Route 53 (or some other DNS service) to point various CNAMEs at your CloudFront distribution.
If you're interested in this approach I'd be happy to provide more info on how to set this up.
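For the DNS piece, the record is simply a CNAME from your image hostname to the domain name CloudFront assigns to your distribution. As a rough sketch in BIND zone-file syntax (the CloudFront domain below is only a placeholder; use the one shown for your own distribution):

; placeholder CloudFront domain, taken from the question's example hostname
images.mydomain.com.    IN    CNAME    d111111abcdef8.cloudfront.net.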
Yes, you will definitely need to run a web server on the machine; otherwise it will not be possible for clients to connect via HTTP on port 80 and view the images in a browser. This is separate from the directory listing issue: once you have a web server running, you can disable directory listing in its configuration.
Install Apache on your server and run it (http://httpd.apache.org/docs/2.0/install.html). You then set up what's called a 'site' (a virtual host) in its configuration, pointing to a local directory which will then be the base directory for your server. It could, for example, be /home/apache on a Unix system. There you create your images folder. If Apache is set up correctly you can then access your images via http://images.mydomain.com/images/sample.jpg.
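A rough sketch of such a site configuration, assuming Apache 2.x and the /home/apache base directory from the example above (paths and names will differ on your system):

<VirtualHost *:80>
    ServerName images.mydomain.com
    DocumentRoot /home/apache
    <Directory /home/apache>
        # -Indexes turns off directory listing, which answers the original question
        Options -Indexes FollowSymLinks
        # Apache 2.4 syntax; on 2.0/2.2 use "Order allow,deny" plus "Allow from all"
        Require all granted
    </Directory>
</VirtualHost>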
I'm a high school student and I'm working on something for fun. I've linked a local file stored on my computer to my webpage. What can I do to make it possible for other devices to access the local HTML file? (meme1.html)
<div id="button">
<a href="C:\Users\Desktop\MEME GENERATOR\meme1.html">
<img src="https://openclipart.org/image/2400px/svg_to_png/140365/1306313012.png" alt="Click here!" height="20%" width="20%"></a>
</div>
<div id="wrapper">
<h1><span class="tight-2">Happy Birthday!</span></h1>
<h2>Go ahead, press the button to generate memes!<span class="tm">™</span>.</h2>
</div>
Basically, when you host the site online you have to change the linked file path to a path on the server instead of one on your local machine.
Edit: If you're using plain HTML my answer stands; if you use a backend platform like Django, Flask, or .NET Core, then the URLs are generated dynamically by your web app.
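For instance, assuming you upload meme1.html to the same folder on the server as the page that links to it (adjust the path if you put it somewhere else), the button markup from the question would become:

<div id="button">
<a href="meme1.html">
<img src="https://openclipart.org/image/2400px/svg_to_png/140365/1306313012.png" alt="Click here!" height="20%" width="20%"></a>
</div>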
By default, the local file is only available to the system it resides on. For instance this link you've created:
<a href="C:\Users\Desktop\MEME GENERATOR\meme1.html">
is telling the browser to look in the C: drive of the machine it's currently installed on. Every other system in your network (and the world, for that matter) will not be able to pull that file, because MEME GENERATOR isn't a folder on their system, so they will see a 404 (file not found) error.
That said, you can load links within your network by using network addresses. This will be the machine's network IP address, typically starting with 192.168.
That said, in order to load the file, the machine it is running on will need to have a port open for the client machine's browser to connect to. For HTTP traffic this is typically port 80, or port 443 when SSL is in use (8080 is another common alternative for plain HTTP).
In doing so, the computer that is serving up the files becomes, logically, a 'server'. And this is the core of the client (user) to server relationship that the whole of the internet and networking is built upon.
Since you're on Windows, you can use something like XAMPP or WAMP to run a server locally with Apache installed, which can serve files through these ports. You're going to need to read up on these technologies a fair bit to get this working, and be forewarned that this will open your system to hacking and the like.
EDIT: rereading your question, are you perhaps trying to get this file to load on your website? If that's the case, you need to upload the file to your website, and it will then have a folder structure similar to a local Windows path: [YOUR.DOMAIN.COM]/[whatever folder you create in the public directory on your server]/meme1.html
Do you mean that you want other people to access your website?
There are a few ways to do that.
One thing you could do is send the whole directory to the person you want to share the webpage with.
The other way is to host the webpage with a hosting provider. There are a lot of hosting websites that will host your website for free.
That way anyone with the URL can access the website.
I am planning to open a shared web hosting company. Before opening, I am configuring everything and checking whether it is all up and running.
I have tried Webmin, Virtualmin, and Ajenti as web hosting managers on Ubuntu Server, but I am not satisfied with them. Is there any alternative that has a secure administrator and client-side control panel and makes it easier to manage client accounts and hosts?
I am using Apache 2 as the web server and MySQL as the database server.
Thank You
Try ZPanel; it is cross-platform and has a great-looking control panel. They also provide an installer which installs Apache, PHP, MySQL, and ZPanel, all pre-configured to run a shared hosting service.
Getting shared hosting right isn't an easy thing to do, especially if you want to allow your clients to use scripting languages like PHP. By default, PHP runs under the same user ID regardless of which customer the files belong to, so customers will be able to see the files (including config files with database passwords) of the other customers.
There are ways around this problem, but most of them are either inconvenient for the customer or they bring other problems (like having to run Apache as root).
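One such workaround, as a rough sketch: the third-party mpm-itk Apache module lets each virtual host run under its own Unix user, so one customer's PHP scripts cannot read another customer's files. The hostname, paths, and user below are made-up examples, and this illustrates exactly the trade-off mentioned above, since the Apache parent process must keep root privileges in order to switch users per request.

<VirtualHost *:80>
    ServerName customer1.example.com
    DocumentRoot /var/www/customer1
    # Run all requests for this vhost as the customer's own user and group
    AssignUserID customer1 customer1
</VirtualHost>

Per-user PHP-FPM pools are another common way to get similar isolation.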
Besides, the shared hosting market is already quite crowded with existing companies that have huge data centers and can therefore offer much more service at lower cost.
So my suggestion would be: look at newer services that you can provide. Docker or LXC hosting isn't that common yet, and you can compete better there.
If you really want to do simple shared hosting and are in search of an admin tool, try ISPConfig 3.
We currently have an application that runs on one dedicated server. I'd like to move it to OpenShift. It has:
A public-facing web app written in PHP
A Java app for administrators running on WildFly
A MySQL database
A filesystem containing lots of images and documents that must be accessible to both the Java and PHP apps. A third party FTPs a data file to the server every day, and a Perl script loads it into the database and the file system.
A Perl script occasionally runs ffmpeg to generate videos, reading images from and writing videos to the filesystem.
Is OpenShift a good solution for this, or would it be better to use AWS directly (for instance, because it has dedicated file system components)?
Thanks
Michael Davis
Ottawa
The shared file system will definitely be the biggest issue here, though you could get around it fairly easily by setting up your applications to use Amazon S3 or some other shared cloud file system.
As for the rest of the application, if I were setting this up I would:
Set up a scaled PHP application. Even if you set the scaling to use just one gear, this will allow you to put the MySQL database on its own gear and even choose a different size for it, such as having medium web gears (that run PHP) and a large gear that runs the MySQL database. This will also allow your WildFly gear to access the database, since it will have an FQDN (fully qualified domain name) that any of the applications on your account can reach. However, keep in mind that it will use a non-standard port instead of 3306.
Then you can set up your WildFly server at whatever size you want, but keep in mind that the MySQL connection environment variables will not be there; you will have to put the connection details into your Java application manually.
As for the Perl script, depending on how intensive it is, you could run it on its own gear of whatever size with some extra storage, or you could co-locate it with either the PHP or Java application as a cron job. You can have it store the files on Amazon S3 and pull them down/upload them as it performs the ffmpeg operations on them. Since OpenShift is also hosted on Amazon (in the US-EAST region), these operations should be pretty fast, as long as you also put your S3 bucket in the US-EAST region.
Those are my thoughts; hope it helps. Feel free to ask questions if you have them. You can also visit http://help.openshift.com and, under "Contact Us", click on "Submit a request"; make sure you reference this Stack Overflow question so I know what you are talking about, and we can discuss solutions for any questions you might have.
Quick question today. I've done a little digging around on the net and I can't really find a definitive answer.
Basically, I run my own server on a redundant dual-core, 4 GB RAM, 2 TB PC (server1).
On here, I would like to make an FTP partition. The reason being, I would very much like to be able to transfer files back and forth between work, uni, and home as I please.
I also run a website from my server which allows me to stream media from my hard drive to any laptop, tablet, desktop, iPhone, Android... you name it!
I would LIKE to be able to add a section on my website whereby I can log in and access my files through a sort of HTML5 front end.
I know how to create a login backed by a database with MD5 hashes, and how to store cookies to stop unauthorised people accessing my FTP.
Any help or a shove in the right direction would be much appreciated! Thanks in advance :D
Yes, it's possible, but it won't be the HTML5 FTP server you mentioned.
You can achieve this by installing a web server like Apache on your machine and then making the directories public. Run Apache on some port and you will be able to access the directory; if your server is running on port 8080, the URL will look like domain.com:8080. You can style the directory listing with a simple script and make it password protected as well using .htaccess (see the sketch below).
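A minimal sketch of the .htaccess approach (the .htpasswd path is an assumption; create that file with the htpasswd tool, and make sure AllowOverride permits AuthConfig and Options for the directory):

# Keep the automatic directory listing
Options +Indexes
# Ask for a username and password before serving anything
AuthType Basic
AuthName "Private files"
AuthUserFile /home/user/.htpasswd
Require valid-user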
osFileManager
The other option is to use a PHP script. Many commercial scripts are available, as well as open-source ones. I recommend trying osFileManager; it has a lot of features, like:
Browse the directory structure
Create files
Upload files
Rename files
Move files
Delete files
Edit files
Change permissions
Change password
Create users
Here are its installation instructions: http://www.osfilemanager.com/osfilemanager-docs.html
Or a paid HTML5 & AJAX based script can be bought for $14 here:
http://codecanyon.net/item/file-manager-and-backup-system/5177206
I've just done an upgrade and am now thinking about the web server directory structure on a local workstation used for web development on Linux. I need to run multiple hosts and different projects. Where is it better to put all the server's docroots: /var/www, /srv, or /www? I plan to make it a separate partition; could that be good for backups? :) I'm looking forward to your thoughts on this.
For development, you could put the files anywhere - perhaps in your home directory (you can allow Apache to serve files from your home directory by setting UserDir enabled in the Apache configuration: see http://httpd.apache.org/docs/2.1/mod/mod_userdir.html).
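A small sketch of that UserDir setup (assuming mod_userdir is loaded; the username alice and the public_html folder are just examples, and Require all granted is Apache 2.4 syntax):

UserDir public_html
UserDir enabled alice
<Directory /home/*/public_html>
    Options -Indexes
    Require all granted
</Directory>

With this in place, a project in /home/alice/public_html/myproject is served at http://localhost/~alice/myproject/.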
For production, /srv/www is probably the best place for the files; this is (loosely) defined in the FHS: http://www.pathname.com/fhs/pub/fhs-2.3.html#SRVDATAFORSERVICESPROVIDEDBYSYSTEM
Additionally, /srv/www is typically (certainly on Fedora, for example) one of the locations that is regarded by SELinux as web content, which allows Apache to read the files.
Under /srv is the proper place for service data files. Making it a separate volume is not necessary, provided the volume it's on is relatively safe from becoming full.
It is well explained in the Filesystem Hierarchy Standard. It should go in /srv.