How do I merge multiple files together for a website? - html

I have a folder containing the files that make up my website, all with different extensions, meaning that in order to access my website on another computer, the entire folder is needed (I might be wrong on this point, though). The folder contains jpeg, html, css, and png files: my website is written in HTML and CSS and contains images with png and jpeg extensions.
How do I merge multiple files together for a website?

To make your website publicly available, you need to upload it to a server.
It is important to understand how a client (like a browser) and a server interact:
client server
request for index.html ----------------------------->
<------------------------------ ok, here is index.html
reading index.html file
request styles.css --------------------------------->
<------------------------------ ok, here is styles.css
request image.jpg ---------------------------------->
<------------------------------- ok, here is image.jpg
So now that you can see that loading a website is a back-and-forth process, you can understand that the server needs every one of these files.
So to answer your question: you just need to upload the entire folder to whichever host you choose, so that people who visit your site get the images and CSS as well.
Some options for free hosting are:
BitBalloon (easy, drag and drop folder)
Netlify (easy, drag and drop folder)
GitHub Pages (a bit harder)
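As long as the HTML references the CSS and images with relative paths, the folder works unchanged on any host. A minimal sketch (the file names index.html, styles.css, and image.jpg are assumptions standing in for your actual files):

```html
<!-- index.html: relative paths resolve inside the uploaded folder,
     so the same links work locally and on the server -->
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="styles.css">  <!-- not C:/Users/... -->
  </head>
  <body>
    <img src="image.jpg" alt="A photo">
  </body>
</html>
```

If any of your pages use absolute paths like C:/Users/..., they will break as soon as the folder moves; relative paths are what let the whole folder be uploaded as one unit.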

Related

Can't change Apache2 shared folder's file

I'd like to change the page that shows which files I've uploaded, but I never found the file to edit. Can it be changed at all? I have read a bunch of articles about this problem but haven't found a solution.
I am talking about this page: Index of /--
Here is my shared folder: Location
Change the index file (probably index.html or index.php), or add one yourself if it does not exist yet. You can use .htaccess, for example, if the directory or the files inside it should be access-protected. You can also redirect users when they access the directory or a file inside it.
The images that you have provided show Apache's fallback directory listing.
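If the goal is just to replace or hide that auto-generated listing, a couple of lines in the folder's .htaccess are usually enough (a sketch; it assumes your Apache configuration has AllowOverride enabled for this directory):

```apache
# Serve your own page instead of the auto-generated listing:
DirectoryIndex index.html

# Or disable directory listings entirely; requests for the bare
# directory then return 403 Forbidden unless an index file exists:
Options -Indexes
```

DirectoryIndex comes from mod_dir and the listing itself from mod_autoindex, so both directives work on a stock Apache install.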

How can I get wget to download all the pdf files from this website?

The website is: https://dgriffinchess.wordpress.com/
I already downloaded the entire website, but I'd also like to have the PDF files. Yes, I've tried this, this and this answer, and unless wget saves the PDF files somewhere other than the main site folder (the one in my home directory), I don't see them downloading at all. (I don't wait until the command finishes; I just wait a few minutes and see that no PDF file has been downloaded yet, and considering that there is one on almost every webpage, I conclude that the PDFs aren't being downloaded.)
I don't really care if I have to re-download the entire website; it's not that big to begin with. What matters most to me are the .pdf files, which don't seem to download in any way.
Many thanks in advance
The PDF files are stored on another domain, dgriffinchess.files.wordpress.com.
To completely download this website along with the PDF files, you need to authorize the domain name where the PDF files are stored using --span-hosts and --domains=domain_a,domain_b:
wget --recursive --page-requisites --convert-links --span-hosts --domains=dgriffinchess.wordpress.com,dgriffinchess.files.wordpress.com https://dgriffinchess.wordpress.com/
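If you only care about the PDFs and not about re-mirroring the HTML, you can additionally restrict what gets kept with wget's --accept filter (a sketch; this exact flag combination is an assumption about what you want to end up with):

```shell
# Crawl the site but keep only .pdf files. wget still fetches HTML pages
# in order to follow their links, then deletes the non-matching files;
# --span-hosts/--domains allow it onto the files.wordpress.com domain.
wget --recursive --level=inf --accept pdf \
     --span-hosts --domains=dgriffinchess.wordpress.com,dgriffinchess.files.wordpress.com \
     https://dgriffinchess.wordpress.com/
```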

html-minifier: Recursive but copying-over invalid files

I first met html-minifier today after running a small site I've created using Hugo through Google PageSpeed.
The first thing I noticed is that, although it does have recursion capabilities, it stops working on unsupported files like images (my speakers started beeping and I freaked out a little).
I've found this Stack Overflow answer showing an apparently undocumented command-line option, --file-ext.
That worked perfectly, but I noticed that in the output directory the folders with non-matching contents were gone.
From the directory root, I saw these were Hugo's folders for CSS, JS and images, plus GitHub Pages' CNAME file. Not only can I not tell for sure that there isn't a single static file left in any of the folders Hugo generated (you may know that Hugo is sometimes unpredictable), but I would also like to keep the language-specific XML sitemaps I've created for some specific folders.
Long story short, is there a way to copy-over unmatching files "as is", keeping input directory ready for a commit/push?
After analyzing the whole directory structure, I could be sure that the tree Hugo creates contains nothing but HTML and XML files, so Occam's razor applied.
Since my Hugo source and its output contents are in completely different directories, it was a simple matter of pointing html-minifier's output directory at the same path as its input directory.
All HTML files are minified, overwriting those Hugo generated.
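In command form, the in-place run looks roughly like this (a sketch: the public/ directory name and the --collapse-whitespace flag are assumptions; only --file-ext is taken from the answer above):

```shell
# Minify every .html file under public/ in place; files with other
# extensions are left untouched because the input and output trees
# are the same directory, so nothing needs to be copied over.
html-minifier --input-dir public --output-dir public \
              --file-ext html --collapse-whitespace
```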

How to block processing of an HTML file while an external file is being downloaded?

I am programming a webserver on an ESP8266 chip. The webserver loads a file named index.html when I open its homepage. This file contains links to several .js and .css files.

From what I've read, HTML files are processed sequentially. When a line containing a link to an external file is encountered, the client opens a separate socket to the server and the file starts downloading; meanwhile, the client carries on processing the HTML file. So while loading the homepage, the client opens several sockets, one for each file, and the files are downloaded in parallel. This is overwhelming the webserver with data, and sometimes the page doesn't load completely because the sockets are prematurely closed, either by the client or by the server.

Is there a way to halt the processing of an HTML page while a file is being downloaded, and continue once the file has been completely downloaded?
Sorry if this is a very simplistic view (and apologies if I've misunderstood the requirements), but are you not able to combine all the JS files into one minified file, and likewise for the CSS? Once that is done, the client loads the CSS along with the HTML file, and then the single JS file.
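The combining step itself can be as simple as concatenation (a sketch with hypothetical file names; real bundles usually also go through a minifier afterwards):

```shell
# Stand-in source files for the demo (the names are hypothetical):
printf 'console.log("app");\n'  > app.js
printf 'console.log("menu");\n' > menu.js

# Concatenate into one bundle; index.html then references only bundle.js,
# so the browser opens a single socket instead of one per script.
cat app.js menu.js > bundle.js
```

The same cat approach works for the CSS files; note that order matters if later rules are meant to override earlier ones.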

change folder index to an HTML page within the folder

I have seen a few examples with a link to a folder, but I really don't understand what it is, how to manipulate it, or how to make it point to a specific HTML page within the folder.
My website is a basic one with only CSS and HTML.
It is laid out as follows:
[file]home.html // C:/Users/user/Desktop/mywebsite/home.html
[folder]Order // C:/Users/user/Desktop/mywebsite/order/
↳[file]ordersheet.html // C:/Users/user/Desktop/mywebsite/order/ordersheet.html
I want the folder path C:/Users/user/Desktop/mywebsite/order/ to serve the file ordersheet.html (C:/Users/user/Desktop/mywebsite/order/ordersheet.html). How can this be done?
To make /order resolve to ordersheet.html, rename ordersheet.html to index.html.
index.html is the default file that the server serves when a visitor requests that specific directory.
<a href="/Users/user/Desktop/mywebsite/order/">link text</a>
link text = what you want to show to the user
/Users/user/Desktop/mywebsite/order/ = directory path
Keep in mind that this will only work locally. If you put it up on a server, visitors don't have access to your full C:/ drive, so you have to use relative links, i.e. just /order/.
If I remember correctly, you use something like this:
<a href="file:///C:/Users/user/Desktop/mywebsite/order/ordersheet.html">link to file on hard disk</a>
If you want that anchor to point to a folder, you would just use this:
<a href="file:///C:/Users/user/Desktop/mywebsite/order/">link to a folder on hard disk</a>
Your browser is operating directly on your system's local filesystem, so you can't.
What you have been looking at is a function of a web server (I'll use Apache HTTPD for examples here).
A typical configuration of a web server would map the local part of the URI onto a directory on the local file system and just serve up the files there if they matched the local part of the URI.
If the local part resolves to a directory (rather than a file) then it would look for a file in that directory with a name that matched a list (typically including index.html) and serve up that file.
If none of the files on the list existed, then it would generate an HTML document containing links to all the files in the directory.
Since there is no web server involved when the browser is reading the local file system directly, there is no way to map the directory onto an index file, so you would need to explicitly include the filename in the URI (or switch to using a web server).
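For completeness, once the site is behind Apache, the directory-to-file mapping can also be changed without renaming anything, via the DirectoryIndex directive (a sketch; it assumes .htaccess overrides are allowed for the directory):

```apache
# .htaccess inside the order/ directory:
# requests for /order/ now serve ordersheet.html
DirectoryIndex ordersheet.html
```

Renaming the file to index.html, as suggested above, achieves the same result with the default configuration.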