Web access to an HTML file in my GitLab repo

I want to use GitLab to manage web-application development. Is it possible to access an HTML file I have created in my GitLab repo from the browser?
Currently there are SSH/HTTP URLs for accessing the repo, like:
ssh: git@something.some.ca:balbal/web-app.git
http: https://something.some.ca/balbal/web-app.git
When I access the HTTPS URL from a browser, it just opens the Git repo management UI (showing all the commits, branches, and files).
What I want is web access to a particular HTML file in my repo (say, an index.html file in a folder called 'www'). I want a URL that I can type into the browser and that will show me the index.html content.
Is it possible for me to set up web access to these HTML files?

As of now, GitLab does not support this functionality. There's a feature request for it: http://feedback.gitlab.com/forums/176466-general/suggestions/5599145-preview-render-static-html-pages-pushed-to-repos
Currently, if you query GitLab for the raw HTML file, it sets HTTP headers that make the browser render it as text/plain instead:
$ curl -I http://my-gitlab/user/project/raw/dev/doc/_book/index.html
HTTP/1.1 200 OK
Server: nginx
Date: Mon, 20 Apr 2015 13:17:48 GMT
Content-Type: text/plain; charset=utf-8
Connection: keep-alive
Status: 200 OK
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-UA-Compatible: IE=edge
Content-Disposition: inline; filename="index.html"
Content-Transfer-Encoding: binary
Cache-Control: private
ETag: "b81191c550c47eae1ab4adf72dfd0c92"
Set-Cookie: request_method=HEAD; path=/
X-Request-Id: 04ae0499-2fdf-4f89-82ab-8392a8d6a076
X-Runtime: 0.019857

Fortunately, with GitLab 10.1, online display of HTML files is now officially supported.
See the documentation for more details. From the release notes: "With GitLab 10.1, we introduce the online visualization of HTML files created by pipelines for public projects, just one click away from the artifacts browser view."

You have to add a file called .gitlab-ci.yml at the root of your project containing:
pages:
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
  artifacts:
    paths:
      - public
  only:
    - master
Then access your file at https://<username>.gitlab.io/project-name/filename.html (GitLab Pages serves the contents of the public directory from the gitlab.io domain, not from the repository URL).
Source: https://roneo.org/en/framagit-render-html/
Update: see also the project raw.githack
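Once the pages job has run, a quick sanity check that the file is now served as real HTML rather than the text/plain shown earlier (the URL shape is an assumption; substitute your own namespace and project):
$ curl -I https://<username>.gitlab.io/<project-name>/index.html
# expect Content-Type: text/html; charset=utf-8 instead of text/plain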

For quick debugging/testing purposes you may use the Firefox PourBico plugin to change the response header to text/html.
Avoid doing this on the public GitLab; do it on your own GitLab deployment. GitLab was not meant to be hacked like this.
Also see GitHub Pages and HTTP headers.

For Chrome, one can use extensions such as Header Hacker and set the headers to render HTML by changing the content-type in the browser. As Christophe Roussy mentioned, this is a hack and you should really have a good reason for doing it.

Related

save and serve bundle.js as gzip version only

I have a React app and save the bundle.js on a CDN (or S3 for this example).
On save I run gzip -9.
On upload to the CDN / S3 I add the header Content-Encoding: gzip.
Now each time a browser / HTTP client downloads the bundle it will get:
curl -I https://cdn.example.com/bundle.min.js
HTTP/2 200
content-type: application/javascript
content-length: 3304735
date: Wed, 27 Feb 2019 22:27:19 GMT
last-modified: Wed, 27 Feb 2019 22:26:53 GMT
content-encoding: gzip
accept-ranges: bytes
This works fine when I test it in a browser. My only concern is that we now save only a gzipped version of the JS bundle, and users will get it regardless of whether they send Accept-Encoding: gzip in the request.
I can't think of any issues this will cause for browsers, but I might be missing something.
Is it bad practice to "enforce" gzip in the response for the bundle.js file?
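For reference, a minimal sketch of the workflow described above, assuming the aws CLI (bucket and file names are placeholders):
# pre-compress once at build time with maximum gzip settings
gzip -9 -c bundle.min.js > bundle.min.js.gz
# upload the compressed bytes under the original name, labelled as gzip
aws s3 cp bundle.min.js.gz s3://my-bucket/bundle.min.js \
    --content-encoding gzip \
    --content-type application/javascript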
It's several months late, but the answer may still be relevant to someone trying the same thing.
Pre-compression with high compression settings can help save a few more percentage points of bandwidth on static resources. However, forcing a single encoding may create problems for some users. As per a study conducted in 2010, ~15% of users with gzip-capable browsers were not sending an appropriate Accept-Encoding request header, the reason being anti-virus software or intermediaries/proxies stripping the Accept-Encoding header to force the server to send the content in plain text.
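A quick way to observe that risk, using the example URL from the question: ask for an uncompressed representation and see whether the server sends gzip anyway.
$ curl -sI -H "Accept-Encoding: identity" https://cdn.example.com/bundle.min.js | grep -i content-encoding
# if this still prints "content-encoding: gzip", clients whose Accept-Encoding
# header was stripped will receive bytes they may not be able to decompress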
If you need a CDN that covers the gap, PageCDN serves exactly the same purpose but offers superior brotli-11 compression and falls back to gzip for browsers that do not support brotli. You do not need to connect to S3 or any external storage; just connect your website or GitHub, or manually upload files to the CDN, and configure the compression level for the files.

How to serve a static JSON file from Phoenix with charset utf-8 for Filestack

I have a Phoenix app, and on the JavaScript side I use the Filestack client. Filestack requests a JSON file from my server. I had put the file in my asset directory and it gets loaded, but the Filestack JavaScript client crashes with an error because it can't read the JSON due to German umlauts (öäü). I looked at the header and it gets served with Content-Type: application/json. I think what I need is Content-Type: application/json; charset=utf-8. I also use webpack 2, btw.
How do I accomplish this?
Plug.Static uses the mime package to set the content-type header. You can override the value for json as described in the mime package's README. Make sure your app is using mime version 1.1.0 or later, because the built-in MIME types were not overridable due to a bug that was fixed in 1.1.0.
Add this to config/config.exs:
config :mime, :types, %{"application/json; charset=utf-8" => ["json"]}
Then, force recompile mime:
mix deps.clean --build mime
and then start Phoenix:
mix phoenix.server
After this, the content-type of .json files served by Plug.Static should be application/json; charset=utf-8:
$ curl -I localhost:4000/js/foo.json
HTTP/1.1 200 OK
server: Cowboy
date: Sat, 18 Feb 2017 14:36:51 GMT
content-length: 3
cache-control: public
etag: 8EA91E
content-type: application/json; charset=utf-8

How to debug mod_perl with reverse proxy of mod_proxy?

I'm new to this with very little background, so bear with me. I want to debug an issue where the page from the reverse-proxied site stops loading prematurely when it requests something like a filename.json file. I noticed the request has the header Content-Type: application/x-www-form-urlencoded, and the response headers are:
$WSEP:
Cache-Control:no-cache
Connection:Keep-Alive
Content-Encoding:gzip
Content-Language:en-US
Content-Length:55
Content-Type:text/html;charset=UTF-8
Date:Sun, 12 Jun 2016 04:54:18 GMT
Expires:Thu, 01 Jan 1970 00:00:00 GMT
Keep-Alive:timeout=5, max=97
Server:Apache
Vary:Accept-Encoding
X-Cnection:Close
I'm wondering why the content-type is text/html when the file is JSON. I tried manipulating the response to application/json, and I even tried, instead of downloading the file from the remote server, serving a dummy JSON file locally from my server's /var/www/html. I was thinking that because this is a JSON file it should have Content-Type: application/json, not text/html. But the issue still exists: loading just stops when the page requests that JSON file.
Now my question is, how can I make sure that the request for that JSON file is the culprit? Or, more appropriately, how can I debug mod_perl? I tried adding
use warnings;
in the code and
PerlWarn On
in the Apache config, but I still don't see any warnings/errors that would guide my debugging.
Can you give me advice on how to find the fault?
By the way, the site returns a 404 for that JSON file, but I assume the proxy server should just pass that through and the page should continue downloading the other files.
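A minimal debugging sketch for this kind of setup (log path and config placement are assumptions; adjust for your distribution):
# raise Apache's log verbosity in the server/vhost config, then reload:
#   LogLevel debug
# watch the error log (mod_perl warnings land here) while reproducing:
$ tail -f /var/log/apache2/error_log
# in another shell, request the JSON file directly through the proxy to see
# the real status code and Content-Type the browser is getting:
$ curl -i http://your-proxy-host/path/to/filename.json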

Why isn't the browser loading cdn file from cache?

Here is a very simple example to illustrate my question using JQuery from a CDN to modify the page:
<html>
<body>
<p>Hello Dean!</p>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js"></script>
<script>$("p").html("Hello, Gabe!")</script>
</body>
</html>
When you load this page with an internet connection, the page displays "Hello, Gabe!". When I then turn off the internet connection, the page displays "Hello Dean!" with an error: jQuery is not available.
My understanding is that CDNs set a long Cache-Control and Expires in the response headers, which I understand to mean that the browser caches the file locally.
$ curl -s -D - https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js | head
HTTP/1.1 200 OK
Server: cloudflare-nginx
Date: Fri, 17 Apr 2015 16:30:33 GMT
Content-Type: application/javascript
Transfer-Encoding: chunked
Connection: keep-alive
Last-Modified: Thu, 18 Dec 2014 17:00:38 GMT
Expires: Wed, 06 Apr 2016 16:30:33 GMT
Cache-Control: public, max-age=30672000
But this does not appear to be happening. Can someone please explain what is going on? Also, how can I get the browser to use the copy of jQuery it has cached somewhere?
This question came up because we want to be using CDN's to serve external libraries, but also want to be able to develop the page offline -- like on an airplane.
I get similar behavior using Chrome and Firefox.
This has nothing to do with the CDN. When the browser encounters a script tag, it requests the file from the server, whether it's hosted on a CDN or on your own server. If the browser previously loaded it from the same address, the server tells it whether the file should be re-downloaded or not (by sending a 304 HTTP status code).
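You can watch that revalidation happen by replaying the request with a conditional header, using the Last-Modified value from the curl output in the question:
$ curl -sI -H "If-Modified-Since: Thu, 18 Dec 2014 17:00:38 GMT" https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.3/jquery.min.js
# a 304 Not Modified response here means the server is telling the browser
# its cached copy is still good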
What you are probably looking for is to cache your application for offline use. It's possible with the HTML5 cache manifest. You need to create a file listing all the files that should be cached for explicit offline use.
Since the previous answer recommended using a cache manifest file, I just wanted to make people aware that this feature is being dropped from the web standards.
Info available from Mozilla:
https://developer.mozilla.org/en-US/docs/Web/HTML/Using_the_application_cache
It's recommended to use Service workers instead of the cache manifest.

Chrome freezes when playing videos from Rackspace cloudfiles

I can't seem to get Chrome to play videos with the HTML5 video tag when I host them on a Rackspace Cloud Files server.
It works perfectly on the regular hosting, but as soon as I link the video via the Rackspace CDN URL, Chrome freezes (total freeze, website UI completely blocked; after a while Chrome pops up a message saying "The following page has become unresponsive bla bla bla").
The video file is fine, as it's the same one I link to on the regular hosting.
I did a bit of spying on the requests, and I initially thought the problem was that the webm files were served by default with the application/octet-stream MIME type. I lodged a ticket with Rackspace and they gave me a way to force the MIME type when uploading the file. I did that, and the file is now correctly sent as video/webm... but Chrome still freezes.
Any idea what could be going wrong here?
EDIT: using iheartvideo, loading the video from Rackspace triggers a MEDIA_ERR_SRC_NOT_SUPPORTED. The same video off a local web server works totally fine (??)
EDIT 2: Happens on both Mac and Windows with the latest mainstream Chrome.
EDIT 3: curl -I results:
Rackspace (no worky):
HTTP/1.1 200 OK
Server: nginx/0.7.65
Content-Type: video/webm
Last-Modified: Thu, 24 Feb 2011 23:45:12 GMT
ETag: 7029f83b241aa691859012dfa047e20d
Content-Length: 20173074
Cache-Control: public, max-age=900
Expires: Fri, 25 Feb 2011 01:32:11 GMT
Date: Fri, 25 Feb 2011 01:17:11 GMT
Connection: keep-alive
Web server (worky):
HTTP/1.1 200 OK
Date: Fri, 25 Feb 2011 01:17:51 GMT
Server: Apache
Last-Modified: Thu, 24 Feb 2011 03:56:26 GMT
ETag: "11a0b47-133d112-49cff32940e80"
Accept-Ranges: bytes
Content-Length: 20173074
Content-Type: text/plain
EDIT 4: For those interested, this is what the Rackspace crew told me to do to set a webm content type on a file:
The file browser is not smart enough to determine the content type video/webm. Unfortunately, there is not a way to change the content type of a file that has already been uploaded. You'll need to use one of the APIs to re-upload your files with the correct content type.
You can also use curl from a Linux/macOS command line if available. Using your username and API key, run this command:
curl -I -X GET -H "X-Auth-User: USERNAME" -H "X-Auth-Key: API_KEY" https://auth.api.rackspacecloud.com/v1.0
From the output there are two important values:
X-Storage-Url: https://storage101.......
X-Storage-Token: Long hash
You can upload the files with:
curl -X PUT -T test.webm -H "Content-Type: video/webm" -H "Content-Length: FILESIZEINBYTE" -H "X-Auth-Token: TOKEN FROM RESPONSE ABOVE" https://STORAGE URL FROM RESPONSE ABOVE/test.webm
You must specify the content type, and you must give the correct length in bytes of what is being uploaded. If not, you will get an invalid request error.
I work quite a bit with the Rackspace API.
Their API actually allows you to set a container as streaming-enabled. My first instinct tells me that you haven't done this.
I stream various file types and they all work an absolute treat.
There is more information about CDN streaming-enabled containers here:
http://docs.rackspace.com/files/api/v1/cf-devguide/content/Streaming-CDN-Enabled_Containers-d1f3721.html
Hopefully this helps, but if not, let me know and I don't mind putting some PHP example code together to help you along.
It's all quite easy, but getting your head around the various API operations that Rackspace has implemented can sometimes be a daunting task.
I don't have a concrete answer, just some thoughts:
What about other browsers?
It works on the web server where the content type above is text/plain, so why force video/webm?
Can Rackspace provide you with (or can you find on their site or someone else's) some sample content that does play, so you can inspect it?
You could try the free trial from Brightcove or BitGravity and see if that works.
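One more thing worth probing, given the header diff in EDIT 3: the working server advertises Accept-Ranges: bytes while the Rackspace response does not, and Chrome's media player relies on HTTP range requests for seeking. A quick test (the URL is a placeholder):
$ curl -sI -H "Range: bytes=0-99" http://your-cdn-url/video.webm
# a range-capable server answers 206 Partial Content; a plain 200 with the
# full Content-Length means range requests are being ignored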