Save & load a texture with alpha component in three.js - json

The following code works perfectly for images that do not contain an alpha channel:
toJSON() {
  let output = super.toJSON();
  output["geometry"] = this.geometry;
  output["imageURL"] = this.mesh.toJSON().images[0]["url"];
  return output;
}

fromJSON(data) {
  super.fromJSON(data);
  this.geometry = data["geometry"];
  this.image_path = data["imageURL"];
  this.refreshImage();
}

refreshImage() {
  const this_obj = this;
  const image_texture = new THREE.TextureLoader().load(
    // image to load
    this.image_path,
    // onLoad callback: create the material only once the texture is loaded and its
    // dimensions are available, so the aspect ratio is based on the actual texture.
    (texture) => {
      this_obj.changeGeometry(texture.image.width / texture.image.height);
    },
    // onProgress callback (not supported)
    undefined,
    // onError callback
    (err) => {
      alert("An error occurred while attempting to load image");
    }
  );
  this.mesh.material.map.dispose();
  this.mesh.material.dispose();
  this.mesh.material = new THREE.MeshPhongMaterial({
    map: image_texture,
    side: THREE.DoubleSide,
    transparent: true
  });
  this.mesh.material.color.set(this.color);
  this.mesh.material.needsUpdate = true;
}
Unfortunately, it does not work for images with an alpha channel: transparent areas are rendered as opaque black.
Does anyone know why this happens and how best to achieve the desired result?
EDIT:
I found the answer to my question when I realized that the issue comes from the Mesh.toJSON call. The method is recursive and a real rabbit hole, but at the bottom of it you find that texture images are converted to base64 by drawing the image onto a temporary internal canvas. This happens in the ImageUtils.js module, inside the getDataURL() function.
The issue is that texture images larger than 2048 pixels in width or height are converted to compressed "jpeg" format rather than the "png" format that retains the alpha component.
This explains everything.
You can load any image and apply it to a material using TextureLoader, but as soon as you call toJSON to serialize your mesh, the alpha component is lost if the underlying image is larger than 2048 pixels in either dimension.
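For reference, the format decision inside getDataURL() boils down to a size check; here is a minimal sketch of that logic (the function name and return shape are mine, not the three.js API):

```javascript
// Sketch of the format choice made by three.js ImageUtils.getDataURL():
// images over 2048px in either dimension are serialized as lossy JPEG,
// which has no alpha channel; smaller images keep PNG (and alpha).
function chooseDataURLFormat(width, height) {
  if (width > 2048 || height > 2048) {
    return { mime: "image/jpeg", quality: 0.6, keepsAlpha: false };
  }
  return { mime: "image/png", keepsAlpha: true };
}

console.log(chooseDataURLFormat(1024, 768).mime);  // → image/png
console.log(chooseDataURLFormat(4096, 1024).mime); // → image/jpeg
```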
The solution in my case was to write my own function that draws to a canvas and converts the image to base64, but supports larger image sizes. Of course, one would have to warn the user that the conversion may take some time.

Here is the texture-to-URL converter that I came up with, borrowing heavily from ImageUtils.js and removing the error-handling code:
function ImageURLfromTexture(image_texture, retain_alpha = true) {
  const image = image_texture.image;
  if (image !== undefined) {
    // already a data URL, nothing to convert
    if (/^data:/i.test(image.src)) {
      return image.src;
    }
    const _canvas = document.createElementNS('http://www.w3.org/1999/xhtml', 'canvas');
    _canvas.width = image.width;
    _canvas.height = image.height;
    const context = _canvas.getContext('2d');
    if (image instanceof ImageData) {
      context.putImageData(image, 0, 0);
    } else {
      context.drawImage(image, 0, 0, image.width, image.height);
    }
    // only fall back to lossy JPEG for large images when alpha is not needed
    if ((_canvas.width > 2048 || _canvas.height > 2048) && !retain_alpha) {
      return _canvas.toDataURL('image/jpeg', 0.6);
    } else {
      return _canvas.toDataURL('image/png');
    }
  } else {
    return null;
  }
}

Related

HTML 5 wait for drawImage to finish in a Vue SPA

I have been trying to debug something for a week, and I now suspect the problem is that the drawImage function does not have time to finish. I have a for loop that composes a canvas element by stitching together two different canvas elements, then adds that composed canvas as a frame to a GIF.js object. The problem I keep running into is that the bottom stitched canvas does not appear, or only partially appears (the picture below started to draw but did not finish), in my output GIF file. My question is: how do I ensure synchronous execution of drawImage in the context of a Vue SPA method? I have experimented with Promise but have not gotten it to work. Can anyone explain and help me with this, please?
EDIT: I tried wrapping my drawImage in a promise with await, but it raised type errors.
I managed to get it working by properly wrapping the drawImage step in a separate method and inside a promise. See the code below for the two methods that were the culprits, now fixed.
async composeCanvas(gif, timeStep, visibleLayers, delayInput) {
  const mapCnv = this.getMapCanvas();
  await this.updateInfoCanvas(timeStep);
  const numberVisibleLayers = visibleLayers.length;
  const composedCnv = await this.stitchCanvases(mapCnv, numberVisibleLayers);
  gif.addFrame(composedCnv, { copy: false, delay: delayInput });
},

async stitchCanvases(mapCanvas, numberVisibleLayers) {
  return new Promise((resolve) => {
    const composedCnv = document.createElement('canvas');
    const ctx = composedCnv.getContext('2d');
    const ctx_w = mapCanvas.width;
    const ctx_h = mapCanvas.height + ((numberVisibleLayers - 1) * 30) + 40;
    composedCnv.width = ctx_w;
    composedCnv.height = ctx_h;
    [
      { cnv: mapCanvas, y: 0 },
      { cnv: this.infoCanvas, y: mapCanvas.height }
    ].forEach((n) => {
      ctx.beginPath();
      ctx.drawImage(n.cnv, 0, n.y, ctx_w, n.cnv.height);
    });
    resolve(composedCnv);
  });
}

HTML/Javascript Webcam Video and Picture with different resolution

I am developing an app in HTML and TypeScript/JavaScript (Chrome support only, as it is embedded in an Electron app) where a webcam (Logitech B910) video is streamed at a fairly low resolution (640 x 480), in order to do some post-processing (mainly TensorFlow and MediaPipe) without consuming too much of the computer's resources.
From time to time, I also need to take image captures at the webcam's highest resolution (1280 x 720 in my case).
What is the fastest way to take a capture at a higher resolution than the currently streamed video resolution?
The code below works, but the process (mainly the double resolution switch) is very slow (roughly 6 seconds on my computer).
Here are the code details:
1/ HTML code :
<div #containerRef class="camera-container" *ngIf="count">
<canvas #canvasref class="camera-canvas"></canvas>
<video #videoref playsinline class="camera-video" width="auto" height="auto">
</video>
</div>
2/ Typescript Webcam video setup code :
static async startCamera(deviceId, containerSize, container: HTMLElement, targetFPS: number, _video, _canvas) {
  // Note: deviceId belongs inside the video constraints (the original had
  // it outside the 'video' object, with a mismatched closing brace).
  const videoConfig = {
    audio: false,
    video: {
      width: { min: 320, ideal: 640, max: 1280 },
      height: { min: 240, ideal: 480, max: 720 },
      frameRate: { ideal: targetFPS },
      deviceId: { exact: deviceId }
    }
  };
  const stream = await navigator.mediaDevices.getUserMedia(videoConfig);
  const camera = new Camera(_video, _canvas);
  camera.video.srcObject = stream;
  await new Promise((resolve) => {
    camera.video.onloadedmetadata = () => { resolve(_video); };
  });
  camera.video.play();
  // Video tag size
  camera.video.width = containerSize.width;
  camera.video.height = containerSize.height;
  // Canvas tag size
  camera.canvas.width = containerSize.width;
  camera.canvas.height = containerSize.height;
  // Container tag size
  container.style.width = containerSize.width.toString() + "px";
  container.style.height = containerSize.height.toString() + "px";
  return camera;
}
3/ The code I use to do the image capture :
async takePhoto(required_width, required_height) {
  const track = this.video.srcObject.getVideoTracks()[0];
  const constraints: MediaTrackConstraints = track.getConstraints();
  // Save the current values to restore at the end
  const currentWidth = constraints.width;
  const currentHeight = constraints.height;
  // Apply the new resolution
  constraints.width = 1920;
  constraints.height = 780;
  await track.applyConstraints(constraints);
  // Do the image capture
  const capture = new ImageCapture(track);
  const { imageWidth, imageHeight } = await capture.getPhotoCapabilities();
  const width = this.setInRange(required_width, imageWidth);
  const height = this.setInRange(required_height, imageHeight);
  const photoSettings = (width && height) ? {
    imageWidth: width,
    imageHeight: height
  } : null;
  const pic = await capture.takePhoto(photoSettings);
  // Restore the previously saved resolution
  constraints.width = currentWidth;
  constraints.height = currentHeight;
  await track.applyConstraints(constraints);
  return pic;
}

setInRange(value, range) {
  if (!range) return NaN;
  // clamp to [range.min, range.max], then snap to the step grid
  let x = Math.min(range.max, Math.max(range.min, value));
  x = Math.round(x / range.step) * range.step;
  return x;
}
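For illustration, setInRange clamps the requested value into the supported range and then snaps it to the step grid; here is a quick standalone check (the range values are made up, not taken from a real getPhotoCapabilities() result):

```javascript
// Standalone copy of setInRange for demonstration; the range object mimics
// a MediaSettingsRange ({ min, max, step }) with invented values.
function setInRange(value, range) {
  if (!range) return NaN;
  let x = Math.min(range.max, Math.max(range.min, value));
  x = Math.round(x / range.step) * range.step;
  return x;
}

const range = { min: 320, max: 1280, step: 2 };
console.log(setInRange(1279, range)); // → 1280 (snapped up to the even grid)
console.log(setInRange(5000, range)); // → 1280 (clamped down to max)
```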
Note: I thought takePhoto() from the Image Capture API was supposed to always use the highest webcam resolution, but in my case (and as reported by getPhotoCapabilities() too) it always uses the resolution I applied when setting up the camera (640 x 480), which is why I need this 'dirty'(?) and slow process.
Is there any faster way to do it?

Forge viewer - fitToView when running locally and loading local sourced model

I am trying to ensure that when a 3D model is loaded into the viewer, it is always oriented in an isometric view and then fit to view.
I have tried the viewer.fitToView(null, null, true) method as well as viewer.fitToView(model), but with no success.
This is what I currently have:
var options = {
  env: 'Local',
};
var viewer = new Autodesk.Viewing.GuiViewer3D(document.getElementById('ADViewer'));
Autodesk.Viewing.Initializer(options, function() {
  if (showDocumentBrowser) {
    // file is 2D, so load the document browser extension
    viewer.loadExtension('Autodesk.DocumentBrowser');
    // for sheet metal pdf drawings, display page 2 first
    if (sDisplayFlag == "sm") {
      viewer.loadExtension('Autodesk.PDF').then(function() {
        // URL parameter `page` will override value passed to loadModel
        viewer.loadModel(sFileName, { page: 2 });
      });
    } else {
      viewer.loadModel(sFileName);
    }
  } else {
    // file is a 3D model. Need to add code here to orient the model
    // in isometric view and then fit to view
    viewer.loadModel(sFileName);
  }
  viewer.setTheme('light-theme');
  viewer.start(options);
});
There is no difference between local and official mode when using fitToView.
The function signature of Viewer3D#fitToView is fitToView(objectIds, model, immediate), so the calls you are using are incorrect.
// Make the camera focus on one object
const selSet = viewer.getSelection();
viewer.fitToView(selSet[0]);
// Make the camera zoom out
viewer.fitToView();

Is it possible to reuse the DOMString returned from HTMLCanvasElement.toDataURL() or canvas.toBlob()?

I am setting up an image cropper; it gives me the width, height, X and Y of the cropped region. Using that, I am creating a preview image (using canvas), but is it possible to store the data returned from HTMLCanvasElement.toDataURL() or canvas.toBlob() and reuse it on other devices and browsers?
Refer to the link below (it uses the canvas.toBlob() method):
https://codesandbox.io/s/72py4jlll6
You can try using these lines of code:
if (!HTMLCanvasElement.prototype.toBlob) {
  Object.defineProperty(HTMLCanvasElement.prototype, 'toBlob', {
    value: function (callback, type, quality) {
      // decode the base64 payload of the data URL into raw bytes
      var binStr = atob(this.toDataURL(type, quality).split(',')[1]),
          len = binStr.length,
          arr = new Uint8Array(len);
      for (var i = 0; i < len; i++) {
        arr[i] = binStr.charCodeAt(i);
      }
      callback(new Blob([arr], { type: type || 'image/png' }));
    }
  });
}
Just adapt it to your code.
You can convert the Blob to a base64 string and save it on the server side, so you can serve this media later to a different user/browser.
The way to do it is:
let reader = new FileReader();
reader.readAsDataURL(blob); // converts the blob to base64 and calls onload
reader.onload = function () {
  data = reader.result; // data URL with a base64-encoded string
};
Decoding this string will get you the binary PNG data.
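As a minimal sketch of that decoding step (using Node's Buffer here; in a browser you would use atob as in the polyfill above; the sample data URL payload is made up):

```javascript
// A data URL has the shape "data:<mime>;base64,<payload>".
// Splitting on the comma and base64-decoding the payload yields the raw bytes.
const dataUrl = "data:text/plain;base64,aGVsbG8="; // made-up sample payload
const payload = dataUrl.split(",")[1];
const bytes = Buffer.from(payload, "base64");
console.log(bytes.toString("utf8")); // → hello
```

For a real PNG data URL, the resulting bytes are the binary PNG file, ready to write to disk or store server-side.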

How do I properly display a chart in html canvas with Chart.js and Angular Framework

I am following a tutorial to build real-time polling using Angular. However, by the end of the tutorial, I am not able to display the chart the way the tutorial does. What am I doing wrong?
I followed the tutorial and used Angular, Pusher, and Chart.js as instructed. It worked fine until I reached the Data Visualization part, where I need to display the polling result in a 'Doughnut' chart. All I have is a white line, which divides the different colors of the dataset; the rest of the chart is not displayed.
My app.component.ts
voteCount = {
  salah: 0,
  kane: 0,
  eriksen: 0,
  kevin: 0,
};

castVote(player) {
  this.http
    .post(`http://localhost:4000/vote`, { player })
    .subscribe((res: any) => {
      this.vote = res.player;
      this.voted = true;
    });
}

getVoteClasses(player) {
  return {
    elect: this.voted && this.vote === player,
    lost: this.voted && this.vote !== player,
  };
}

chartLabels: string[] = Object.keys(this.voteCount);
chartData: number[] = Object.values(this.voteCount);
chartType = 'doughnut';

ngOnInit() {
  const channel = this.pusher.init();
  channel.bind('vote', ({ player }) => {
    this.voteCount[player] += 1;
    this.chartData = Object.values(this.voteCount);
  });
}
My app.component.html
<div class="chart-box" *ngIf="voted">
<h2>How others voted</h2>
<canvas
baseChart
[data]="chartData"
[labels]="chartLabels"
[chartType]="chartType"
>
</canvas>
</div>
According to the tutorial, the data visualization should look like this:
However, mine looks like this:
I changed the background color of the div, so you can see there is a white line below Eriksen just like in the tutorial; however, the rest of the data is not shown.
Edit: I hard-coded some data and assigned them to chartLabels, chartData and chartType, and now the canvas can visualize them:
chartLabels = ['salah', 'kane', 'eriksen', 'kevin'];
chartData = [120, 150, 180, 90];
chartType = 'doughnut';
So this means
chartLabels: string[] = Object.keys(this.voteCount);
chartData: number[] = Object.values(this.voteCount);
chartType = 'doughnut';
are not receiving any data. Why is that?
Edit 2:
After further experimenting, I found that in
ngOnInit() {
  const channel = this.pusher.init();
  channel.bind('vote', ({ player }) => {
    this.voteCount[player] += 1;
    this.chartData = Object.values(this.voteCount);
  });
}
Object.values(this.voteCount) never increases, so its values stay at 0, and that is why the chart is empty. How do I increase it each time I click on a player's picture?
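For what it's worth, the counting-and-reassignment pattern itself works once the 'vote' event actually arrives; here is a standalone sketch of that logic (no Pusher or Angular involved; the names mirror the component fields):

```javascript
// Standalone model of the vote-count update: each event increments one
// counter and rebuilds chartData as a NEW array, which is what lets a
// chart binding detect the change. If chartData stays all zeros, the
// handler is never firing (e.g. the event is not reaching the client).
const voteCount = { salah: 0, kane: 0, eriksen: 0, kevin: 0 };
let chartData = Object.values(voteCount); // [0, 0, 0, 0]

function handleVote(player) {
  voteCount[player] += 1;
  chartData = Object.values(voteCount); // fresh array, new reference
}

handleVote("kane");
handleVote("kane");
handleVote("salah");
console.log(chartData); // → [ 1, 2, 0, 0 ]
```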