How can I make the 3D objects that I load onto the map using the WebGL Overlay API cast shadows on the map tiles and on other loaded objects?
It seems to me that this is not supported yet (or the feature was removed), so is there any workaround?
Preferred WebGL framework: ThreeJs
A workaround with three.js would be to put a thin box (it will not work with PlaneGeometry) on the ground that is completely transparent but receives the shadow. This can be achieved with ShadowMaterial.
Note: this would only show the shadows from meshes added to the WebGL overlay, not from the buildings on the map.
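The gist of that workaround, as a minimal sketch (sizes, opacity, and the light position are illustrative; in the overlay this would go inside onAdd, and the renderer needs renderer.shadowMap.enabled = true):

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Invisible shadow catcher: ShadowMaterial renders nothing except the
// shadows cast onto it. A very thin box stands in for the ground.
const shadowCatcher = new THREE.Mesh(
  new THREE.BoxGeometry(200, 200, 0.1),       // thin box rather than PlaneGeometry
  new THREE.ShadowMaterial({ opacity: 0.4 })  // 0 = invisible shadows, 1 = solid black
);
shadowCatcher.receiveShadow = true;
scene.add(shadowCatcher);

// A shadow-casting light is still required for anything to show up.
const sun = new THREE.DirectionalLight(0xffffff, 1);
sun.position.set(-50, 100, 50);
sun.castShadow = true;
scene.add(sun);
```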
We do not have this option on Google Maps yet, because the map overlay we get from WebglOverlayView does not have a receive-shadow option.
I tried to create my own plane over the Google map to achieve the shadow.
My plane's opacity is very light, around 0.3. I also added an image below of the colorful plane to illustrate.
Please see @yosan_melese's answer; it is perfect to use ShadowMaterial.
Here is example code:
import { Loader } from '@googlemaps/js-api-loader';
import * as THREE from 'three';
import { FBXLoader } from 'three/examples/jsm/loaders/FBXLoader.js';

const apiOptions = {
  apiKey: 'A***********8',
  version: 'beta',
  mapIds: ['']
};

let directionalLight;

const mapOptions = {
  tilt: 0,
  heading: 0,
  zoom: 18,
  center: { lat: 33.57404, lng: 73.1637 },
  mapId: '',
  mapTypeId: 'roadmap'
};

/*
  roadmap: the default map that you usually see.
  satellite: satellite view of Google Maps and Google Earth, when available.
  hybrid: a mixture of roadmap and satellite view.
  terrain: a map based on terrain information, such as mountains and valleys.
*/

async function initMap() {
  const mapDiv = document.getElementById('map');
  const apiLoader = new Loader(apiOptions);
  await apiLoader.load();
  return new google.maps.Map(mapDiv, mapOptions);
}

let scene, renderer, camera, loader;

function initWebglOverlayView(map) {
  const webglOverlayView = new google.maps.WebglOverlayView();

  webglOverlayView.onAdd = () => {
    // Set up the scene.
    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera();

    const ambientLight = new THREE.AmbientLight(0xffffff, 0.75); // soft white light
    scene.add(ambientLight);

    directionalLight = new THREE.DirectionalLight(0xffffff, 1);
    directionalLight.position.set(-90, 0, 20);
    directionalLight.castShadow = true;

    const d = 100;
    directionalLight.shadow.camera.left = -d;
    directionalLight.shadow.camera.right = d;
    directionalLight.shadow.camera.top = d;
    directionalLight.shadow.camera.bottom = -d;
    scene.add(directionalLight);
    scene.add(new THREE.CameraHelper(directionalLight.shadow.camera));

    // FLOOR
    /*
    const plane = new THREE.Mesh(
      new THREE.PlaneGeometry(2050, 2200, 300),
      new THREE.MeshPhongMaterial({ color: 0xF3F4F5, opacity: 0.3, transparent: true })
    );
    plane.rotation.set(0, 0, 0);
    plane.castShadow = true;
    plane.receiveShadow = true;
    scene.add(plane);
    */

    // After @yosan_melese's answer I am using ShadowMaterial.
    const geometry = new THREE.PlaneGeometry(2000, 2000);
    geometry.rotateX(-Math.PI / 2);
    const material = new THREE.ShadowMaterial();
    material.opacity = 0.5;
    const plane = new THREE.Mesh(geometry, material);
    plane.rotation.x = 2;
    plane.position.set(0, 0, 0);
    plane.receiveShadow = true;
    scene.add(plane);

    loader = new FBXLoader();
    loader.load('model/name_model.fbx', function (object) {
      object.scale.set(1, 1, 1);
      object.rotation.set(1.480, 0.950, 0.070);
      object.castShadow = true;
      object.receiveShadow = true;
      object.name = 'name_model';
      object.traverse(function (child) {
        if (child.isMesh) {
          child.castShadow = true;
          child.receiveShadow = true;
        }
      });
      scene.add(object);
    });
  };

  webglOverlayView.onContextRestored = (gl) => {
    // Create the three.js renderer, using the map's WebGL rendering context.
    renderer = new THREE.WebGLRenderer({
      canvas: gl.canvas,
      context: gl,
      ...gl.getContextAttributes(),
    });
    renderer.autoClear = false;
    renderer.shadowMap.enabled = true;
    renderer.shadowMap.type = THREE.PCFSoftShadowMap;
  };

  webglOverlayView.onDraw = (gl, coordinateTransformer) => {
    // Update the camera matrix to ensure the model is georeferenced correctly on the map.
    const matrix = coordinateTransformer.fromLatLngAltitude(mapOptions.center, 10);
    camera.projectionMatrix = new THREE.Matrix4().fromArray(matrix);
    webglOverlayView.requestRedraw();
    renderer.render(scene, camera);
    // Always reset the GL state.
    renderer.resetState();
  };

  webglOverlayView.setMap(map);
}

(async () => {
  const map = await initMap();
  initWebglOverlayView(map);
})();
After applying @yosan_melese's code:
I'm trying to build a free-hand drawing application using the HTML canvas element, and I'm trying to implement an 'Undo' feature. It basically takes a snapshot of the current state of the canvas when the user draws something and saves it in a list of some sort; when the user presses the undo button, it pops the last saved state and draws it onto the canvas. I've tried calling context.drawImage() after image.onload as shown, but still no use.
export class WhiteboardScreenComponent implements OnInit, AfterViewInit {
  @ViewChild('myCanvas')
  myCanvas: ElementRef<HTMLCanvasElement>;

  painting: boolean = false;
  strokeWidth: number = 2;
  strokeColor: string = this.colors[this.selectedColor];
  lineStyle: CanvasLineCap = 'round';
  ctx: CanvasRenderingContext2D;
  undoHistory: any[] = [];

  constructor() { }

  ngOnInit(): void { }

  ngAfterViewInit(): void {
    this.ctx = this.myCanvas.nativeElement.getContext('2d');
    this.resizeCanvas();
    this.myCanvas.nativeElement.addEventListener('mousedown', this.startPainting);
    this.myCanvas.nativeElement.addEventListener('mouseup', this.finishPainting);
    this.myCanvas.nativeElement.addEventListener('mousemove', this.draw);
  }

  startPainting = (e) => {
    this.painting = true;
    this.ctx.beginPath();
    this.ctx.moveTo(e.clientX, e.clientY);
  }

  // saving the state of the canvas here
  finishPainting = (e) => {
    let src = this.myCanvas.nativeElement.toDataURL("image/png");
    this.undoHistory.push(src);
    this.ctx.closePath();
    this.painting = false;
  }

  draw = (e) => {
    if (!this.painting) return;
    this.setProperties();
    this.ctx.lineTo(e.clientX, e.clientY);
    this.ctx.stroke();
  }

  setProperties(): void {
    this.ctx.lineWidth = this.strokeWidth;
    this.ctx.lineCap = this.lineStyle;
    this.ctx.strokeStyle = this.strokeColor;
  }

  // fetching the last saved state and drawing it onto the canvas
  undo(): void {
    if (this.undoHistory.length > 0) {
      let image = new Image();
      image.onload = () => {
        this.ctx.drawImage(image, 0, 0);
      };
      image.src = this.undoHistory.pop();
    }
  }
}
Before calling the drawImage() method, you're supposed to clear the screen. That can be accomplished by simply resizing the canvas (which resets it), or by calling clearRect().
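A minimal sketch of that fix, pulled out of the component as a standalone function so it can be exercised anywhere (in the real component this logic would stay inside undo(), with ctx and the canvas element coming from the class):

```javascript
// Sketch: clear the canvas before redrawing the saved snapshot.
// `ImageCtor` is injected so the function can be tested outside a browser;
// in the component you would just use the global Image constructor.
function undo(ctx, canvas, undoHistory, ImageCtor) {
  if (undoHistory.length === 0) return;
  const image = new ImageCtor();
  image.onload = () => {
    // Wipe the stale strokes first, then draw the previous state back.
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(image, 0, 0);
  };
  image.src = undoHistory.pop(); // data URL saved in finishPainting
}
```

Without the clearRect() call, the popped snapshot is composited on top of the current strokes, so the undone stroke stays visible.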
I have the following code to export my STL file, but it doesn't seem to work.
var loader = new THREE.STLLoader();
loader.load('./Mr_Jaws.stl', function (geometry) {
  var material = new THREE.MeshPhongMaterial({ color: 0x0692CE, specular: 0x111111, shininess: 100 });
  mesh = new THREE.Mesh(geometry, material);
  mesh.scale.set(1.2, 1.2, 1.2);
  mesh.castShadow = true;
  mesh.receiveShadow = true;
  mesh.position.set(0, 0, 0);

  var box = new THREE.BoxHelper(mesh);
  scene.add(box);
  scene.add(mesh);

  var export_stl = new THREE.STLExporter();
  console.log(export_stl.parse(mesh));
});
What I get is the following:
solid exported
endsolid exported
Hope you can help me out.
Three.js version: r76 (dev)
When I export from Blender to JSON, the model is displayed in gray in all browsers. I'm using the exporter from github.com/mrdoob/three.js/tree/master/utils/exporters/blender.
I'm attaching the export settings, a screenshot from Chrome, and the JS.
How can I solve this problem?
Thank you for your tips!
Screenshot from Chrome
Export settings:
Settings page
JS:
var scene, camera, renderer;
var WIDTH = 300;
var HEIGHT = 300;
var SPEED = 0.03;

function init() {
  scene = new THREE.Scene();
  initMesh();
  initCamera();
  initLights();
  initRenderer();
  document.body.appendChild(renderer.domElement);
}

function initCamera() {
  camera = new THREE.PerspectiveCamera(70, WIDTH / HEIGHT, 1, 10);
  camera.position.set(0, 3.5, 5);
  camera.lookAt(scene.position);
}

function initRenderer() {
  renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
  renderer.setSize(WIDTH, HEIGHT);
}

function initLights() {
  var light = new THREE.AmbientLight(0xffffff);
  scene.add(light);
}

var mesh = null;

function initMesh() {
  var loader = new THREE.JSONLoader();
  loader.load('./sp.json', function (geometry, materials) {
    mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial(materials));
    mesh.scale.x = mesh.scale.y = mesh.scale.z = 0.75;
    mesh.translation = THREE.GeometryUtils.center(geometry);
    scene.add(mesh);
  });
}

function rotateMesh() {
  if (!mesh) {
    return;
  }
  mesh.rotation.y -= SPEED;
}

function render() {
  requestAnimationFrame(render);
  rotateMesh();
  renderer.render(scene, camera);
}

init();
render();
If you created your model's textures with the node editor, it's possible you won't see the textures. How about a UV unwrap?
I'm trying to treat layers as pages -- i.e. I draw on one page, then turn the page and draw on another, each time storing the previous page in case the user goes back to it.
In my mind this translates as:
Create current_layer global pointer.
Each time newPage() is called, store the old layer in an array, and overwrite the pointer
layer_array.push(current_layer); //store old layer
current_layer = new Kinetic.Layer(); //overwrite with a new
New objects are then added to the current_layer which binds them to the layer, whether they are drawn or not. (e.g. current_layer.add(myCircle) )
Retrieving a page is simply updating the pointer to the requested layer in the array and redrawing the page. All the child nodes attached to the layer will be drawn too.
current_layer = layer_array[num-1]; //num is Page 2 e.g
current_layer.draw()
However, nothing is happening! I can create new pages and store them appropriately, but I cannot retrieve them again...
Here's my full code (my browser is having problems using jsfiddle):
<html>
<head>
  <script src="http://d3lp1msu2r81bx.cloudfront.net/kjs/js/lib/kinetic-v4.3.0.min.js"></script>
  <script>
    // Global
    var stage;            // canvas
    var layer_array = [];
    var current_page;     // pointer to current layer

    window.onload = function () {
      stage = new Kinetic.Stage({
        container: 'container',
        width: 400,
        height: 400
      });
      // Add initial page to stage to draw on
      newPage();
    };

    //--- Functions ----//
    function newPage() {
      if (!current_page) {
        console.log("current page undefined");
      } else {
        layer_array.push(current_page);
        // stage.remove(current_page);  // Nope, not working.
        stage.removeChildren();
        // Works, but I think it unbinds all objects
        // from their specific layers...
        // stage.draw()
        console.log("Stored layer and removed it from stage");
      }
      current_page = new Kinetic.Layer();
      console.log("Currently on page:" + (layer_array.length + 1));
      stage.add(current_page);
      stage.draw();
    }

    function gotoPage(num) {
      stage.removeChildren();
      stage.draw();
      num = num - 1;
      if (num >= 0) {
        current_page = layer_array[num];
        console.log("Now on page" + (num + 1));
        stage.add(current_page);
        stage.draw();
      }
    }

    function addCircletoCurrentPage() {
      var rand = Math.floor(3 + (Math.random() * 10));
      var obj = new Kinetic.Circle({
        x: rand * 16, y: rand * 16,
        radius: rand,
        fill: 'red'
      });
      var imagelayer = current_page;
      imagelayer.add(obj);
      imagelayer.draw();
    }
  </script>
</head>
<body>
  <div id="container"></div>
  <button onclick="addCircletoCurrentPage()">click</button>
  <button onclick="newPage()">new</button>
  <button onclick="gotoPage(1)">page1</button>
  <button onclick="gotoPage(2)">page2</button>
  <button onclick="gotoPage(3)">page3</button>
</body>
</html>
This was a fun problem. I think this fixes your troubles: http://jsfiddle.net/LRNHk/3/
Basically, you shouldn't call remove() or removeChildren(), as you risk de-referencing the layers.
Instead you should use layer.hide() and layer.show(); this way you keep everything intact and get speedy draw performance.
So your goto-page function should look like this:
function gotoPage(num) {
  for (var i = 0; i < layer_array.length; i++) {
    layer_array[i].hide();
  }
  layer_array[num].show();
  console.log("Currently on page:" + num);
  console.log("Current layer: " + layer_array[num].getName());
  stage.draw();
}
I also modified your other functions, which you can see in the jsfiddle.
Okay, I changed my approach: instead of swapping layers (which would be 100x easier and make more sense), I opted for serializing the entire stage and loading it back.
It works, but it really shouldn't have to be like this.
// Global
var stage;            // canvas
var layer_array = [];
var current_page;     // pointer to current layer
var page_num = 0;

window.onload = function () {
  stage = new Kinetic.Stage({
    container: 'container',
    width: 400,
    height: 400
  });
  // Add initial page to stage to draw on
  newPage();
};

//--- Functions ----//
function newPage() {
  if (!current_page) {
    console.log("current page undefined");
  } else {
    savePage(page_num);
    stage.removeChildren();
    console.log("Stored layer and removed it from stage");
  }
  current_page = new Kinetic.Layer();
  console.log("Currently on page:" + (layer_array.length + 1));
  stage.add(current_page);
  stage.draw();
  page_num++;
}

function savePage(num) {
  if ((num - 1) >= 0) {
    var store = stage.toJSON();
    layer_array[num - 1] = store;
    console.log("Stored page:" + num);
  }
}

function gotoPage(num) {
  savePage(page_num);
  stage.removeChildren();
  if (num - 1 >= 0) {
    var load = layer_array[num - 1];
    document.getElementById('container').innerHTML = ""; // blank
    stage = Kinetic.Node.create(load, 'container');
    var images = stage.get(".image");
    for (i = 0; i < images.length; i++) {
      // function to induce scope
      (function () {
        var image = images[i];
        var imageObj = new Image();
        imageObj.onload = function () {
          image.setImage(imageObj);
          current_page.draw();
        };
        imageObj.src = image.attrs.src;
      })();
    }
    stage.draw();
    page_num = num; // update page
  }
}

function addCircletoCurrentPage() {
  var rand = Math.floor(3 + (Math.random() * 10));
  var obj = new Kinetic.Circle({
    x: rand * 16, y: rand * 16, name: "image",
    radius: rand,
    fill: 'red'
  });
  var imagelayer = current_page;
  imagelayer.add(obj);
  imagelayer.draw();
}
// set the scene size
var WIDTH = 800;
var HEIGHT = 640;

// set some camera attributes
var VIEW_ANGLE = 45,
    ASPECT = WIDTH / HEIGHT,
    NEAR = 0.1,
    FAR = 10000;

var $container = $('#container');

// create a WebGL renderer and camera
var renderer = new THREE.WebGLRenderer();
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
var scene = new THREE.Scene();

// the camera starts at 0,0,0 so pull it back
camera.position.z = 1000;

// start the renderer
renderer.setSize(WIDTH, HEIGHT);

// attach the renderer-supplied DOM element
$container.append(renderer.domElement);

// create the sphere's material
var sphereMaterial = new THREE.MeshLambertMaterial({
  color: 0xCC0000
});

// set up the sphere vars
var radius = 50, segments = 16, rings = 16;

// create a new mesh with sphere geometry
var sphere = new THREE.Mesh(
  new THREE.SphereGeometry(radius, segments, rings),
  sphereMaterial
);
sphere.position.x = 20;
sphere.doubleSided = true;

// add the sphere to the scene
scene.add(sphere);

// and the camera
scene.add(camera);

/* Light it up */
var pointLight = new THREE.PointLight(0xFFFFFF);
pointLight.position.set(10, 50, 130);
scene.add(pointLight);

window.setInterval(update, 1000 / 60);

var keyboard = new THREEx.KeyboardState();

function update() {
  if (keyboard.pressed("W")) {
    camera.position.z -= 5;
  } else if (keyboard.pressed("S")) {
    camera.position.z += 5;
  }
  renderer.render(scene, camera);
}
That is my code, a simple demo where using the W and S keys zooms in/out on a 3D sphere.
However, when I get "inside" the sphere, I can't see its back face, even though I set sphere.doubleSided = true;.
I tried putting renderer.setFaceCulling(true); (and false too), but neither fixed it.
Any ideas?
Use side: THREE.DoubleSide in the material.
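For example, applied to the MeshLambertMaterial from the question (a sketch; the mesh-level doubleSided flag was replaced by this material-level option in later three.js releases):

```javascript
// Both front and back faces are rendered, so the sphere stays visible
// from the inside as well.
var sphereMaterial = new THREE.MeshLambertMaterial({
  color: 0xCC0000,
  side: THREE.DoubleSide
});
```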
Which version of three.js are you using? (jsfiddle)