OpenLayers 5: stroke width scaling?

Is it possible to make Stroke width dependent on the zoom level?
Basically, I am going to use LineStrings/MultiLineStrings to highlight some roads but I would also like to be able to zoom out and not have massive clutter (there will be about 8 pretty wide lines along each path).

You can use the resolution which is passed to a style function. I've used this code to display contours, drawing the lines at multiples of 50 m with a wider stroke; when the resolution is greater than 2.5, both widths are reduced proportionally.
style: function(feature, resolution) {
    return new ol.style.Style({
        stroke: new ol.style.Stroke({
            color: 'rgba(224,148,94,1)',
            width: (feature.getProperties().value % 50 == 0 ? 3.175 : 1.863) * Math.min(1, 2.5 / resolution)
        })
    });
}
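If it helps, here is a minimal sketch of how such a style function plugs into a vector layer. The `roadSource` variable and the base width of 4 are assumptions for illustration, not part of the answer above:
// Sketch: wiring a resolution-aware style function into a vector layer.
// `roadSource` is assumed to be an existing ol.source.Vector with the LineStrings.
var roadLayer = new ol.layer.Vector({
    source: roadSource,
    style: function(feature, resolution) {
        // same idea as above: full width up close, thinner when zoomed out
        return new ol.style.Style({
            stroke: new ol.style.Stroke({
                color: 'rgba(224,148,94,1)',
                width: 4 * Math.min(1, 2.5 / resolution)
            })
        });
    }
});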

Related

`Autodesk.Viewing.ScreenShot.getScreenShotWithBounds` uses viewer's default camera dimensions instead of bounds

Following this advice I am trying to replace our usage of viewer::getScreenshot with Autodesk.Viewing.ScreenShot.getScreenShotWithBounds, which calls Autodesk.Viewing.ScreenShot.getScreenShot, to support larger screenshots. However, the bounds are ignored. The behaviour seems to be that it takes a screenshot of the viewer's default camera, matching the viewer's dimensions, and then stretches the image to fit the given width and height. I expected the function to return a screenshot of the elements inside the given bounding box.
Is the function getScreenShotWithBounds supposed to do something different from what I am assuming?
Example code (LMV 7.40.0):
const bbounds: THREE.Box3; // calculated for some elements I want to screenshot
Autodesk.Viewing.ScreenShot.getScreenShotWithBounds(
    NOP_VIEWER,
    Math.ceil(bbounds.size().x * 4),
    Math.ceil(bbounds.size().y * 4),
    (blob) => window.open(blob),
    { bounds: bbounds, margin: 0 });
Attached images: a manual screenshot of the viewer vs. the (squashed) image returned by getScreenShotWithBounds.
Update:
I misunderstood the function. Autodesk.Viewing.ScreenShot.getScreenShotWithBounds just fits the bounds into the camera view. The bounds are not used for any cropping. See my more detailed answer.
The given width and height for Autodesk.Viewing.ScreenShot.getScreenShotWithBounds must have the same aspect ratio as the viewer (see Adam Nagy's answer), i.e.:
getScreenShotWithBounds(NOP_VIEWER, NOP_VIEWER.getDimensions().width * 4, NOP_VIEWER.getDimensions().height * 4, options);
getScreenShotWithBounds just fits the bounds into the camera view (internally it calls viewer.navigation.fitBounds(true, bounds, false);). The bounds are not used for any kind of cropping / pixel calculation or otherwise.
To get a specific aspect ratio by cropping, you have to provide getCropBounds in the options parameter.
For example, to force a 4:3 aspect ratio by cropping:
getCropBounds: function(viewer: Autodesk.Viewing.Viewer3D, camera: Autodesk.Viewing.UnifiedCamera, bounds: THREE.Box3): THREE.Box2 {
    // Calculate the crop bounds in pixels.
    // If the crop bounds are larger, the screenshot's width / height is used.
    // `width` and `height` are the screenshot dimensions from the enclosing scope.
    const aspectRatio = new THREE.Vector2(4, 3);
    const viewerBoundsWidthRatio = width / aspectRatio.x;
    const viewerBoundsHeightRatio = height / aspectRatio.y;
    const cropToHeight = viewerBoundsWidthRatio > viewerBoundsHeightRatio;
    const smallerScaleRatio = cropToHeight ? viewerBoundsHeightRatio : viewerBoundsWidthRatio;
    const cropBoundsSize = aspectRatio.clone().multiplyScalar(smallerScaleRatio);
    const cropBounds = new THREE.Box2(new THREE.Vector2(0, 0), new THREE.Vector2(cropBoundsSize.x, cropBoundsSize.y));
    // center the crop rectangle along the axis that is being cropped
    const offset = cropToHeight
        ? new THREE.Vector2((width - cropBoundsSize.x) / 2, 0)
        : new THREE.Vector2(0, (height - cropBoundsSize.y) / 2);
    cropBounds.min.add(offset);
    cropBounds.max.add(offset);
    return cropBounds;
}
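To tie it together, a hedged sketch of the full call, reusing the names from the question; extracting the callback above into a standalone `getCropBounds` function is my assumption:
// Sketch: scaled viewer dimensions plus the crop callback from above.
const options = {
    bounds: bbounds,               // only fitted into the camera view, never cropped
    margin: 0,
    getCropBounds: getCropBounds   // the 4:3 cropping function shown above
};
Autodesk.Viewing.ScreenShot.getScreenShotWithBounds(
    NOP_VIEWER,
    NOP_VIEWER.getDimensions().width * 4,    // keep the viewer's aspect ratio
    NOP_VIEWER.getDimensions().height * 4,
    (blob) => window.open(blob),
    options);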
The squashed image you are getting suggests it, but it's not clear from the comments in the source code that you should keep the viewer's aspect ratio for the width and height input parameters. So you should only scale them uniformly, depending on your needs: getScreenShotWithBounds(NOP_VIEWER, viewer_width * x, viewer_height * x, etc.
Then the bounds in the options should take care of the cropping.

Thinner html canvas stroke width

I have set the 2D context line width to its apparent minimum of 1:
let context = this._canvas.getContext("2d");
context.lineWidth = 1;
context.stroke();
But that is still fairly wide: maybe 20 pixels. Is there any way to make it thinner?
The key is to ensure that you have the same number of virtual pixels as you have actual ones. With more screen pixels than buffer pixels, the image is scaled up; with fewer, it is shrunk.
Scaling up a 1 pixel wide line produces rectangles, but what happens when we scale down a 1 pixel line? The simple answer is it becomes less intense.
The following example draws 2 canvases the same size. The key difference though, is the size of the pixel buffer they each have. The first has 10,000 pixels - 100x100. The second has just 100 pixels - 10x10. Even though we've just drawn a 1 pixel wide line, you can see the very different appearances of these lines.
window.addEventListener('load', onLoaded, false);

function onLoaded(evt)
{
    let can = document.querySelector('canvas');
    let ctx = can.getContext('2d');
    ctx.moveTo(0, 0);
    ctx.lineTo(can.width, can.height);
    ctx.stroke();

    let can2 = document.querySelectorAll('canvas')[1];
    let ctx2 = can2.getContext('2d');
    ctx2.moveTo(0, 0);
    ctx2.lineTo(can2.width, can2.height);
    ctx2.stroke();
}
canvas
{
    width: 100px;
    height: 100px;
}
<canvas width=100 height=100></canvas><br>
<canvas width=10 height=10></canvas>
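Building on that: if the mismatch comes from a high-DPI display, a common fix (sketched here; not part of the answer above) is to size the pixel buffer to the device pixels and scale the context back to CSS units:
// Sketch: match the canvas buffer to device pixels for crisp 1px strokes.
var canvas = document.querySelector('canvas');
var dpr = window.devicePixelRatio || 1;
var cssWidth = 100, cssHeight = 100;          // displayed (CSS) size
canvas.style.width = cssWidth + 'px';
canvas.style.height = cssHeight + 'px';
canvas.width = cssWidth * dpr;                // buffer size in device pixels
canvas.height = cssHeight * dpr;
var ctx = canvas.getContext('2d');
ctx.scale(dpr, dpr);                          // keep drawing in CSS-pixel units
ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(cssWidth, cssHeight);
ctx.lineWidth = 1;
ctx.stroke();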

Write text on canvas with background

Is it possible to draw an image on canvas and write text with a background?
For example like this:
How text works in canvas
Unfortunately no, you can't produce text with a background using the text methods - they only fill or outline the text itself.
This is because the glyphs from the typeface (font) are converted to individual shapes or paths, where the "background" would be the inner part of the glyph itself (the part you see when using fill). There is no layer for the black-box (the rectangle the glyph fits within) beyond its geometric position, so we need to provide a sort of black-box and bearings ourselves.
On old computer systems most fonts were binary fonts that either set or cleared pixels; instead of just clearing the background, one could opt to draw a background color instead. This is not the case with vector-based typefaces by default (although a browser has direct access to the glyphs' geometry and can therefore provide a background this way).
Creating custom background
In order to create a background you would need to draw it first using other means such as shapes or an image.
Examples:
ctx.fillRect(x, y, width, height);
or
ctx.drawImage(image, x, y [, width, height]);
then draw the text on top:
ctx.fillText('My text', x, y);
You can use measureText to find out the width of the text (and in the future also the height: ascent + descent) and use that as a basis:
var width = ctx.measureText('My text').width; /// width in pixels
You can wrap all this in a function. The function here is basic but you can expand it with color and background parameters as well as padding etc.
/// expand with color, background etc.
function drawTextBG(ctx, txt, font, x, y) {
    /// let's save the current state as we make a lot of changes
    ctx.save();
    /// set font
    ctx.font = font;
    /// draw text from top - makes life easier at the moment
    ctx.textBaseline = 'top';
    /// color for background
    ctx.fillStyle = '#f50';
    /// get width of text
    var width = ctx.measureText(txt).width;
    /// draw background rect assuming height of font
    ctx.fillRect(x, y, width, parseInt(font, 10));
    /// text color
    ctx.fillStyle = '#000';
    /// draw text on top
    ctx.fillText(txt, x, y);
    /// restore original state
    ctx.restore();
}
Just note that this way of "measuring" height is not accurate. You can measure the height of a font by using a temporary div/span element and getting the computed style from it once its font and text are set.
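As a sketch of that div/span trick (the helper name is made up; assumes it runs in a normal document):
// Sketch: approximate a font's pixel height via a hidden element.
function measureFontHeight(font) {
    var span = document.createElement('span');
    span.style.font = font;               // e.g. '20px sans-serif'
    span.style.position = 'absolute';     // keep it out of the layout flow
    span.style.visibility = 'hidden';
    span.textContent = 'Mg';              // covers ascender and descender
    document.body.appendChild(span);
    var height = span.offsetHeight;
    document.body.removeChild(span);
    return height;
}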
A simpler solution is to call fillText twice: first a string of Unicode U+2588 '█' (full block) repeated to the same length as the text, using the background color; then call fillText as normal with the foreground color.
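A rough sketch of that two-pass idea (the helper name is made up; the widths only line up for monospaced fonts):
// Sketch: backdrop of repeated '█' blocks, then the real text on top.
function fillTextWithBlockBG(ctx, txt, x, y, fg, bg) {
    ctx.fillStyle = bg;
    ctx.fillText('\u2588'.repeat(txt.length), x, y);  // U+2588 FULL BLOCK
    ctx.fillStyle = fg;
    ctx.fillText(txt, x, y);
}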
This function gives you vertically and horizontally centered text with a background. It only works well with monospaced fonts (characters with the same width). The function counts the number of characters in the string you wish to print and multiplies it by 0.62 (assuming each character's width is roughly 0.62 times the font size). The background is 1.5 times taller than the font size. Change this to fit your needs.
// assumes global `canvas` and `ctx` variables
function centeredText(string, fontSize, color) {
    var i = string.length;
    i = i * fontSize * 0.62;
    if (i > canvas.width) {
        i = canvas.width;
    }
    ctx.fillStyle = "rgba(255, 255, 255, 0.8)";
    ctx.fillRect(canvas.width / 2 - i / 2, canvas.height / 2 - (fontSize * 1.5) / 2, i, fontSize * 1.5);
    ctx.font = fontSize.toString() + "px monospace";
    ctx.fillStyle = color;
    ctx.textBaseline = "middle";
    ctx.textAlign = "center";
    ctx.fillText(string, canvas.width / 2, canvas.height / 2);
}
So calling the function would look something like this.
centeredText("Hello World", 30, "red");

HTML5 canvas line - how can I let them appear smoother?

I want to draw lines, but they have sharp, jagged edges, e.g. if you use the line to write a word. In Photoshop you can use brushes that are less sharp, or you can work at a high resolution and zoom out. Is there a nice trick for HTML5 canvas lines, too?
canvas.addEventListener('mousemove', function(e) {
    this.style.cursor = 'pointer';
    if (this.down) {
        ctx.beginPath();
        ctx.moveTo(this.X, this.Y);
        ctx.lineTo(e.pageX, e.pageY);
        ctx.strokeStyle = 'red';
        ctx.lineWidth = 1;
        ctx.stroke();
        this.X = e.pageX;
        this.Y = e.pageY;
    }
}, false);
As you’ve discovered, when you let the user draw a polyline with mousemove you end up with a list of points that draws a very jagged line.
What you need to do is:
Reduce the number of points
Keep the resulting path true to the user’s intended shape.
So you want to go from "before" to "after":
The Ramer-Douglas-Peucker Polygon Simplification Algorithm
You can do this by using the Ramer-Douglas-Peucker (RDP) Polygon Simplification Algorithm. It reduces the “jaggedness” of a polyline while keeping the essence of the intended path.
Here is an overview of how RDP works and what it’s capable of achieving: http://ianqvist.blogspot.com/2010/05/ramer-douglas-peucker-polygon.html
And here is a JavaScript implementation of the RDP algorithm, thanks to Matthew Taylor: https://gist.github.com/rhyolight/2846020
In Matthew's implementation, "epsilon" is a number indicating how closely the simplified path should stay true to the original "jaggedness": the larger the epsilon, the more aggressively the polyline is simplified.
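If you'd rather not pull in the gist, here's a minimal sketch of the RDP idea (not Matthew Taylor's exact code; points are plain {x, y} objects):
// Minimal RDP sketch: keep the point farthest from the chord, recurse on both halves.
function simplifyRDP(points, epsilon) {
    if (points.length < 3) return points.slice();
    var first = points[0], last = points[points.length - 1];
    var dx = last.x - first.x, dy = last.y - first.y;
    var len = Math.hypot(dx, dy) || 1;
    var maxDist = 0, index = 0;
    for (var i = 1; i < points.length - 1; i++) {
        // perpendicular distance from points[i] to the line first-last
        var p = points[i];
        var dist = Math.abs(dy * (p.x - first.x) - dx * (p.y - first.y)) / len;
        if (dist > maxDist) { maxDist = dist; index = i; }
    }
    if (maxDist <= epsilon) return [first, last];   // everything in between is noise
    var left = simplifyRDP(points.slice(0, index + 1), epsilon);
    var right = simplifyRDP(points.slice(index), epsilon);
    return left.slice(0, -1).concat(right);         // drop the duplicated middle point
}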

How to transition between two images using a grayscale transition map

Imagine you have two images A and B, and a third grayscale image T. A and B contain just about anything, but let's assume they're two scenes from a game.
Now, assume that T contains a diamond gradient. Being grayscale, it goes from black on the outside to white on the inside.
Over time (let's assume 256 otherwise unspecified "ticks", to match the grayscale levels), A should transition into B, giving a diamond-wipe effect. If T instead contained a grid of smaller rectangular gradients, it would be as if each part of the image did its own rectangular wipe.
You might recognize this concept if you've ever worked with RPG Maker or most visual novel engines.
The question of course is how this is done. I know it involves per-pixel blending between A and B, but that's all I've got.
For added bonus, what about soft edges?
And now, the conclusion
Final experiment, based on eJames's code
Sample from the final experiment (waves up, 50%): http://helmet.kafuka.org/TransitionsSample.png
The grayscale values in the T image represent time offsets. Your wipe effect would work essentially as follows, on a per-pixel basis:
for (timeIndex from 0 to 255)
{
    for (each pixel)
    {
        if (timeIndex < T.valueOf[pixel])
        {
            compositeImage.colorOf[pixel] = A.colorOf[pixel];
        }
        else
        {
            compositeImage.colorOf[pixel] = B.colorOf[pixel];
        }
    }
}
To illustrate, imagine what happens at several values of timeIndex:
timeIndex == 0 (0%): This is the very start of the transition. At this point, most of the pixels in the composite image will be those of image A, except where the corresponding pixel in T is completely black. In those cases, the composite image pixels will be those of image B.
timeIndex == 63 (25%): At this point, more of the pixels from image B have made it into the composite image. Every pixel at which the value of T is less than 25% white will be taken from image B, and the rest will still be image A.
timeIndex == 255 (100%): At this point, every pixel in T will negate the conditional, so all of the pixels in the composite image will be those of image B.
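As a concrete (hedged) canvas sketch of that loop, assuming A, B and T have already been drawn into same-size ImageData buffers; the function name is made up:
// Sketch: one frame of the hard-edged wipe, on ImageData buffers.
function wipeFrame(aData, bData, tData, timeIndex) {
    var out = new ImageData(aData.width, aData.height);
    for (var i = 0; i < out.data.length; i += 4) {
        // T is grayscale, so the red channel works as the threshold
        var src = timeIndex < tData.data[i] ? aData : bData;
        out.data[i]     = src.data[i];      // R
        out.data[i + 1] = src.data[i + 1];  // G
        out.data[i + 2] = src.data[i + 2];  // B
        out.data[i + 3] = 255;              // fully opaque
    }
    return out;  // display with ctx.putImageData(out, 0, 0)
}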
In order to "smooth out" the transition, you could do the following:
for (timeIndex from 0 to (255 + fadeTime))
{
    for (each pixel)
    {
        blendingRatio = edgeFunction(timeIndex, T.valueOf[pixel], fadeTime);
        compositeImage.colorOf[pixel] =
            (1.0 - blendingRatio) * A.colorOf[pixel] +
            blendingRatio * B.colorOf[pixel];
    }
}
The choice of edgeFunction is up to you. This one produces a linear transition from A to B:
float edgeFunction(value, threshold, duration)
{
    if (value < threshold) { return 0.0; }
    if (value >= (threshold + duration)) { return 1.0; }
    // simple linear transition:
    return (value - threshold) / duration;
}
I'd say you start with image A; then on every step I you use the pixels of image A for every position where I is smaller than T, and the pixels of image B otherwise.
For a soft edge you might define another parameter d, and calculate your pixels like so:
For every point (x,y) you decide between the following three options:
I < T(x,y) - d: the point is equal to the point of A
T(x,y) - d <= I < T(x,y) + d: let z = I - (T(x,y) - d); then the point is equal to A(x,y) * (1 - z/(2d)) + B(x,y) * (z/(2d))
I >= T(x,y) + d: the point is equal to the point of B
This produces a linear edge; of course, you can choose from an arbitrary number of functions for the edge.
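A small sketch of those three cases as a per-channel function (the name is made up; I, T and d are on the 0-255 scale):
// Sketch: soft-edged blend between corresponding channel values of A and B.
function softPixel(aChan, bChan, T, I, d) {
    if (I < T - d) return aChan;              // still fully image A
    if (I >= T + d) return bChan;             // fully image B
    var z = I - (T - d);                      // 0 .. 2d inside the edge band
    var w = z / (2 * d);                      // blend weight toward B
    return aChan * (1 - w) + bChan * w;
}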