Cesium-terrain-builder-docker: error when I load Cesium.js in the browser - cesiumjs

I want to serve custom terrain data from my own server, so I tried Cesium Terrain Builder Docker (cesium-terrain-builder-docker).
Generating the quantized-mesh terrain with the cesium-terrain-builder Docker image seems to complete successfully, but an error occurs when I load Cesium.js in the browser.
I generated the quantized-mesh terrain using cesium-terrain-builder like this:
This is my HTML code:
and this is the error message in the Chrome browser console:

Change
url: 'http://localhost:8080/tilesets/daegu/tiles'
to
url: 'http://localhost:8080/tilesets/tiles'
in the terrainProvider.
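To double-check the path, note that Cesium requests layer.json directly under the url you configure in the terrainProvider. A tiny helper (hypothetical name, not part of Cesium) builds that URL so you can paste it into the browser and see whether it resolves:

```javascript
// Hypothetical helper: build the layer.json URL that Cesium will request
// from the terrainProvider's base url.
function layerJsonUrl(base) {
  // Drop any trailing slashes, then append /layer.json.
  return base.replace(/\/+$/, '') + '/layer.json';
}

console.log(layerJsonUrl('http://localhost:8080/tilesets/tiles'));
// http://localhost:8080/tilesets/tiles/layer.json
```

If that URL returns a 404, the terrainProvider url is pointing at the wrong place.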

Can you please provide some more information on how you serve the tileset?
Are you using a normal web server or a dedicated service for publishing tilesets like
CesiumTerrainServer?
In general I see two possible sources for this error:
Terrain tiles serving path is wrong:
In that case, try opening your layer.json file in the browser, e.g. open http://localhost:8080/tilesets/daegu/tiles/layer.json. If that fails, you can be pretty sure there is something wrong with the path. Check your path and the documentation of the tileset provider service to fix that. For CesiumTerrainServer this is described here.
Tiles are served with wrong content encoding:
This usually only applies if you serve your tiles directly from a web server like Nginx or Apache. Cesium terrain tiles are gzipped and have to be served using gzip Content-Encoding. Try adding this header to the web server location you serve the tiles from, e.g. using Nginx:
location ~* \.terrain$ {
add_header Content-Encoding gzip;
}
The full example is available here.
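For Apache, a sketch of the equivalent configuration would look like this (assuming mod_headers is enabled; adjust to your setup):

```apache
<FilesMatch "\.terrain$">
    Header set Content-Encoding gzip
</FilesMatch>
```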
Here is an example of how to run CesiumTerrainServer with Docker, along with some documentation; that might be helpful as well.

Related

Cesium terrain not loading from CDN

I created a custom terrain using Cesium Terrain Builder docker and am trying to serve it from a standard CDN or cloud bucket. I've uploaded all the terrain folders to the CDN and they are correctly accessible, e.g.
https://mycdn.com/terrains/terrain1/layer.json
https://mycdn.com/terrains/terrain1/0/0/0.terrain
https://mycdn.com/terrains/terrain1/0/1/0.terrain
and so on - I can access all the files from a browser.
But when trying to access them from my Cesium app, I don't see the terrain (i.e. I get a transparent environment). Checking the Network tab in Chrome, I can see that layer.json and the root terrain files have been accessed successfully. There are no errors in the console log. The terrain just doesn't show up.
Any idea why?
P.S. The same data loads fine from a Cesium Terrain Server container...
Found the cause. The tiles are gzipped. The CDN needs to be configured so that every tile file is served with a Content-Encoding: gzip header.
Once that's done, the terrain loads and I can see all tiles get loaded in Chrome (not just the root tiles).

IPFS X-Ipfs-Path on static images referenced on a dynamic non-IPFS https page forces localhost gateway to load over https

I'm trying to utilize IPFS to load static content, such as images and JavaScript libraries, on a dynamic site served over the https protocol.
For example https://www.example.com/ is a normal web 2.0 page, with an image reference here https://www.example.com/images/myimage.jpg
When the request is made on myimage.jpg, the following header is served
x-ipfs-path: /ipfs/QmXXXXXXXXXXXXXXXXX/images/myimage.jpg
Which then gets translated by the IPFS Companion browser plugin as:
https://127.0.0.1:8081/ipfs/QmXXXXXXXXXXXXXXXXX/images/myimage.jpg
The problem is that the request has been redirected to an https URL on the local IP, which won't load due to a protocol error. (Changing the above from https to http works.)
Now, if I were to request https://www.example.com/images/myimage.jpg directly from the address bar, it loads the following:
http://localhost:8081/ipfs/QmYcJvDhjQJrMRFLsuWAJRDJigP38fiz2GiHoFrUQ53eNi/images/myimage.jpg
And then a 301 to:
http://(some other hash).ipfs.localhost:8081/images/myimage.jpg
Resulting in the image loading successfully.
I'm assuming because the initial page is served over SSL, it wants to serve the static content over SSL as well. I also assume that's why it then uses the local IP over https, rather than localhost in the other route.
My question is, how do I get this to work?
Is there a header which lets IPFS Companion force the load over http? If so, I'm assuming this would cause browser security warnings due to mixed content. I have tried adding this header without luck: X-Forwarded-Proto: http
Do I need to do something to enable SSL on 127.0.0.1 and connect that up with my local node? If so, this doesn't seem to be the default setup for clients, and I worry that the content will show as broken images for anyone who hasn't followed some extra setup steps.
Is it even possible to serve static content over IPFS from non-IPFS pages?
Any hints appreciated!
Edit: This appears to affect both the Chrome engine and Firefox.
Looks like a configuration error on your end.
Using IPFS Companion with default settings on a clean browser profile works as expected.
Opening: https://example.com/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi redirects fine to http://localhost:8080/ipfs/bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi which then redirects to unique Origin based on the root CID: http://bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi.ipfs.localhost:8080/
You use a custom port (8081), which means you changed the Gateway address in ipfs-companion Preferences at some point.
Potential fix: go there and make sure your "Local gateway" is set to http://localhost:8081 (instead of https://).
If you have http:// there, then see if some other extension or browser setting is forcing https:// (check whether this behavior occurs on a clean browser profile, then add extensions/settings one by one to identify the source of the problem).
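The redirect the companion performs is essentially the configured gateway base plus the x-ipfs-path header value, which is why the scheme in your "Local gateway" setting ends up in the final URL. A rough sketch of that translation (hypothetical function, not the extension's actual code):

```javascript
// Hypothetical sketch: companion-style redirect = gateway base + x-ipfs-path.
// If the gateway base is configured with https://, the result is an https URL
// on 127.0.0.1/localhost, which fails with a protocol error.
function toGatewayUrl(gatewayBase, xIpfsPath) {
  return gatewayBase.replace(/\/+$/, '') + xIpfsPath;
}

console.log(toGatewayUrl('http://localhost:8081',
                         '/ipfs/QmXXXXXXXXXXXXXXXXX/images/myimage.jpg'));
// http://localhost:8081/ipfs/QmXXXXXXXXXXXXXXXXX/images/myimage.jpg
```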

Google Maps not working over https://

I am using Google Maps over http and it works perfectly fine. But when I installed SSL certificates on the same site, it stopped working. It gives me these errors:
Mixed Content: The page at 'https://url' was loaded over HTTPS, but
requested an insecure script
'http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/src/markerclusterer.js?_=1***************'.
This request has been blocked; the content must be served over HTTPS.
UPDATE: On May 12th, 2016, Google decommissioned the google-maps-utility-library-v3.googlecode.com source for this library. However, since Google moved the source to GitHub a while back, please consider the GitHub details covered at the end of this post and, in particular, the final note about including the script and resources directly in your project.
In addition to changing your script inclusion url from:
http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/src/markerclusterer.js
to:
https://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/src/markerclusterer.js
you'll also need to specify the imagePath option when instantiating your MarkerClusterer along the following lines:
var mc = new MarkerClusterer(map, markers, {
imagePath: 'https://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m'
});
This will avoid the following warning which covers the same ground as the script error you've highlighted:
Mixed Content: The page at 'https://url' was loaded over HTTPS, but requested an insecure image 'http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/images/m1.png'. This content should also be served over HTTPS.
The reason this occurs is that, by default, the MarkerClusterer library uses the following non https setting as the root for its cluster images (m1.png, m2.png etc.):
MarkerClusterer.prototype.MARKER_CLUSTER_IMAGE_PATH_ =
'http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/' +
'images/m'
Whilst we encountered this issue a while back, it does appear to have been addressed in response to the following pull request on the library's GitHub repository:
Changed HTTP to HTTPS in image link
This GitHub version can be accessed from RawGit by using the following script url:
https://cdn.rawgit.com/googlemaps/js-marker-clusterer/gh-pages/src/markerclusterer.js
and the following imagePath can be used to access the GitHub images:
var mc = new MarkerClusterer(map, markers, {
imagePath: 'https://cdn.rawgit.com/googlemaps/js-marker-clusterer/gh-pages/images/m'
});
Whilst the above urls (with the cdn prefixes) have no traffic limits or throttling and the files are served via a super fast global CDN, please bear in mind that RawGit is a free hosting service and offers no uptime or support guarantees.
This is covered in more detail in the following SO answer:
Link and execute external JavaScript file hosted on GitHub
This post also covers that, if you're linking to files on GitHub, in production you should consider targeting a specific release tag to ensure you're getting the desired release version of the script.
However, as the custodians of the js-marker-clusterer repository have yet to create any releases, this isn't currently possible.
As a result, you should seriously consider downloading and including the library and its resources directly in your project for production purposes.
If you access your website through https, all content that it serves must come from https as well. That includes images, stylesheets and JS scripts. Just change http to https in the URL.
I faced this problem today with the MarkerClusterer library, so I had to update the images directory manually in the JS source file. Open markerclusterer.js
and replace:
https://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer
with the directory on GitHub:
https://googlemaps.github.io/js-marker-clusterer
And you should be fine.
Check the script inclusion URL for Google Maps and remove the http protocol from the URL:
http://google-maps-utility-library-v3.googlecode.com/svn/trunk/mar...
will become
//google-maps-utility-library-v3.googlecode.com/svn/trunk/mar...
In this manner the script will be served using the page's own protocol (http or, in your case, https).
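A protocol-relative URL is simply the original URL with its scheme stripped; a minimal sketch of the transformation:

```javascript
// Strip the scheme so the browser reuses the protocol the page
// itself was loaded with (http or https).
function protocolRelative(url) {
  return url.replace(/^https?:/, '');
}

console.log(protocolRelative('http://maps.google.com/maps/api/js'));
// //maps.google.com/maps/api/js
```

Note that when the resource is available over https, linking to https:// directly is usually the simpler choice today.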
Just change the Google http:// to https://
http://maps.google.com/ to https://maps.google.com/
Add the following meta tag in the head:
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
This solved my problem.

CDN library blocked in Chrome

I'm using Leaflet/OSM for a small map on a site. The site itself is accessible via HTTPS, while the Leaflet library can (afaik) only be retrieved via an HTTP connection. Now Chrome doesn't include the library and gives me the following message in the console:
[blocked] The page at https://example.com/foo/bar ran insecure content from http://cdn.leafletjs.com/leaflet-0.5/leaflet.css
Any idea how I could work around this?
http://cdnjs.com/libraries/leaflet/ has Leaflet. They have an HTTPS version as well.
//cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.3/leaflet.css
//cdnjs.cloudflare.com/ajax/libs/leaflet/0.7.3/leaflet.js
Use the above URLs in your code. The same protocol used to load the current page will be used to fetch Leaflet assets as well.
Instead of using the hosted version of Leaflet, you can provide the necessary JavaScript and CSS files yourself: just grab the latest version of Leaflet at http://leafletjs.com/download.html and copy the "dist" directory into your project directory. After that you can change the links from "http://cdn.leafletjs.com/leaflet-0.5/" to "./dist/".
Remove the "http:" from your reference and try "//cdn.leafletjs.com/leaflet-0.5/leaflet.css". It will use the current page's protocol to send the request.
Note that the map tiles themselves are downloaded from the CDN via HTTP, so there is little point in serving only the JS/CSS over SSL.

Restlet - serving up static content

Using Restlet I needed to serve some simple static content in the same context as my web service. I've configured the component with a Directory, but in testing, I've found it will only serve 'index.html', everything else results in a 404.
router.attach("/", new Directory(context, new Reference(baseRef, "./content")));
So... http://service and http://service/index.html both work,
but http://service/other.html gives me a 404
Can anyone shed some light on this? I want any file within the ./content directory to be available.
PS: I eventually plan to use a reverse proxy and serve all static content off another web server, but for now I need this to work as is.
Well, I figured out the issue. Restlet appears to route requests based on prefix, but it does not handle longest-prefix matching the way I expected, and it also seems to ignore file extensions.
So, for example, suppose I have a resource attached to "/other" and a Directory on "/". If I request /other.html, what actually happens is that I get the "/other" resource (the extension is ignored?) and not the static file from the Directory, as I would expect.
If anyone knows why this is, or how to change it, I'd love to know. It's not a big deal, just for testing. I think we'll be putting Apache or Nginx in front of this in production anyway.
Routers in Restlet use the Template.MODE_STARTS_WITH matching mode by default. You can change the router's default by calling router.setMatchingMode(Template.MODE_EQUALS). This turns on strict matching by default for attach(). You can still override individual routes with setMatchingMode.
Good documentation on Restlet Routing
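The difference between the two matching modes boils down to prefix versus exact string comparison. The following is only an illustration with plain string checks, not Restlet's internal code:

```javascript
// MODE_STARTS_WITH: a route template matches any path that begins with it.
const startsWithMatch = (template, path) => path.startsWith(template);
// MODE_EQUALS: the path must equal the template exactly.
const equalsMatch = (template, path) => template === path;

console.log(startsWithMatch('/other', '/other.html')); // true  -> the "/other" resource wins
console.log(equalsMatch('/other', '/other.html'));     // false -> the request can fall through to the Directory on "/"
```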