three.js crashing on Chrome Android - google-chrome

I'm trying to open this link http://3dhd.co.il/mobile/170 in the Chrome browser on a Samsung Galaxy S6, and after about two minutes the browser crashes and I get the message "Rats! WebGL hit a snag".
I've tried following the Chrome instructions (clearing cookies etc.) and tried enabling the flags in chrome://flags.
How can I debug this problem and understand why it is happening?
I developed that scene, and I can't understand what I have done wrong.
Thanks.

Here's a suggestion.
Overlapping vertices example:
It means the 3D mesh object is too complex. Try using Blender (free software): import the 3D object and enter Edit Mode, where you can see the number of vertices. The best optimization in this example is to make a convex, low-poly version of the mesh.
Also check the materials!
See this answer:
Three.js project crashes mobile
Look at:
https://blender.stackexchange.com/questions/6253/how-to-convert-from-high-poly-to-low-poly
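To confirm whether geometry complexity really is the culprit before reworking the model in Blender, a rough diagnostic like the one below can help. This is only a sketch and assumes a reasonably recent three.js build where meshes expose isMesh and use BufferGeometry:

// Log how many vertices each mesh contributes, so the heaviest objects
// in the scene are easy to spot against a mobile GPU budget.
function logSceneComplexity(scene) {
    var totalVertices = 0;
    scene.traverse(function (object) {
        if (object.isMesh && object.geometry && object.geometry.attributes.position) {
            var count = object.geometry.attributes.position.count;
            totalVertices += count;
            console.log(object.name || '(unnamed mesh)', count, 'vertices');
        }
    });
    console.log('Total vertices in scene:', totalVertices);
}
// Usage: call once after the model has finished loading, e.g. logSceneComplexity(scene);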


WebGL with Viewer - Can't use multiple render targets. Falling back to two passes

How it is reproduced:
I go to a page where the viewer is used. Whether or not I wait for it to fully load (it doesn't matter), there is no error in the console. Next, I navigate to another page on the site where the viewer is also used.
When that page loads, an error appears in the console (screenshot attached). After that, if I try to render the model in the viewer, the viewer does not draw the model completely, and it does not allow you to work with elements (for example, selection).
What could cause this error?
I didn't do anything special with the initialization of the viewer.
Viewer version: 7.65
It is definitely possible to run the viewer in multiple browsers/tabs at the same time. Each tab should be using its own GPU context. I just tried loading the following URLs in different browsers at the same time, and all the viewers are rendering the models correctly:
https://forge-basic-app.herokuapp.com/#dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6Zm9yZ2UtYmFzaWMtYXBwLWJ1Y2tldC9yYWNfYWR2YW5jZWRfc2FtcGxlX3Byb2plY3QucnZ0
https://forge-basic-app.herokuapp.com/#dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6Zm9yZ2UtYmFzaWMtYXBwLWJ1Y2tldC9TcG9ydHMlMjBDYXIuZHdmeA
Are you seeing this issue with any Forge app (like the one I linked to), or just with your own? And have you tried other browsers and other devices? It could also be hardware-related. Perhaps your hardware has fewer resources, and when one of the browser tabs loses its GPU context, the viewer cannot recover from that?
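One way to check the context-loss theory is to listen for WebGL context events on the viewer's canvas. This is only a diagnostic sketch; it assumes `viewer` is your initialized viewer instance and that its canvas can be reached via viewer.canvas or the viewer container:

// Log when the browser drops or restores the GPU context for this tab.
var canvas = viewer.canvas || viewer.container.querySelector('canvas');
canvas.addEventListener('webglcontextlost', function (event) {
    event.preventDefault(); // allows the context to be restored later
    console.warn('WebGL context lost - the GPU dropped this tab\'s context');
});
canvas.addEventListener('webglcontextrestored', function () {
    console.info('WebGL context restored - the viewer may still need a full reload');
});
// If "context lost" shows up right before the incomplete rendering, the problem
// is resource-related rather than something in your initialization code.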

Forge Viewer select element by tapping screen not working correctly on Surface Pro using IE 11 (via Cordova)

Using the Surface Pro touch screen to select an element in the viewer works sometimes; other times the tap seems to translate into a rotate/zoom action. In that case the viewer rotates/moves and the element is NOT selected.
When logging the events, there are plenty of mouse down/up events along with mouse moves when it doesn't work. When selection does work, a single click event occurs.
Double click seems to work OK.
Zoom/rotate etc. using the standard tools works OK.
Using the keyboard cover touchpad available for the Surface Pro to move and click works as expected, and the element is selected.
Running the same application on a GETAC Windows 10 ruggedised tablet, element selection works correctly, so the issue seems specific to the Surface Pro.
I'm unable to change browsers, as Cordova apps use IE11 on Windows and that is currently fixed.
The only solution I can think of for the moment is to remove the standard navigation tools completely (somehow) and recreate a select-mode tool that ignores mouse moves and uses the button-down event to select an element.
Any suggestions on how to fix this?
Tech Details:
Windows 10 Pro,
Surface Pro,
Browser: IE11,
Viewer version 2.11,
Other: WinJS81 Cordova application
Thanks for any help.
We've had problems with touch events on the Surface Pro in the past. It sounds like the edges of the touch screen are overly sensitive and are triggering extra touch points.
Does the problem happen if you are holding the device up, gripping it with one hand, and using your other hand to touch/select a 3D object?
Could you try doing a selection again, but this time make sure your other hand is not holding the edge of the screen? (Perhaps place the device on a desk, so you are not holding it up.)
Found a fix for this issue. In viewer3D.js, in the base ToolController, there is the line
var kClickThreshold = 2;
This value is used further down in the code to determine whether a single click has occurred. It does this by comparing the XY positions of the down and up events.
// Distance the pointer moved between the down and up events.
var deltaX = _downX - event.canvasX;
var deltaY = _downY - event.canvasY;
_downX = -1;
_downY = -1;
// Only treat the gesture as a click if the movement stayed within the threshold.
if( Math.abs(deltaX) <= kClickThreshold && Math.abs(deltaY) <= kClickThreshold )
    _this.handleSingleClick( event );
If the movement is above this threshold it does not trigger a single click; if below, it does.
In testing, increasing the value to around 5-7 meant that element selection worked consistently. (There is still a minor rotate or zoom at the same time as the selection occurs, but I assume that would be another part of the viewer that needs adjusting.)
Unfortunately it does require editing the viewer code, but it's easy enough. I added code to overwrite the standard value if an external variable existed, roughly as sketched below.
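A rough sketch of that patch inside the viewer's ToolController (the global name window.FORGE_CLICK_THRESHOLD is my own placeholder here, not part of the viewer API):

var kClickThreshold = 2;
// Hypothetical override: let the hosting app tune the click threshold
// without re-editing viewer3D.js after every viewer update.
if (typeof window.FORGE_CLICK_THRESHOLD === 'number') {
    kClickThreshold = window.FORGE_CLICK_THRESHOLD;
}
// In the Cordova app, set it before the viewer script loads, e.g.:
// window.FORGE_CLICK_THRESHOLD = 6;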
It would be nice for future viewer development if more of these types of properties were exposed so that direct editing of the code is not required.
Still, it is good to have the source code available to be able to debug at this level.
At a guess, the Surface Pro 4 must have a more sensitive touch system, or it could just be related to IE11 as well.

How to select a camera in a web app?

How do you choose between the front and rear camera in a web app?
This is also useful for: How do you choose between multiple microphones?
There is a live example on:
https://webrtc.github.io/samples/src/content/devices/input-output/
(This WebRTC link is new and should work on Chrome mobile.)
The link is from this answer - https://stackoverflow.com/a/35480435/2414207 - which discusses MediaDevices.enumerateDevices() [new] vs MediaStreamTrack.getSources() [deprecated] in depth.
You can find further information (slightly outdated now, but useful to get the big picture) about this on:
http://www.html5rocks.com/en/tutorials/getusermedia/intro/#toc-gettingstarted
Scroll down and skip:
Feature detection
Gaining access to an input device
Setting media constraints
until
Selecting a media source
For reference: my former live example (broken)
https://simpl.info/getusermedia/sources/
It uses MediaStreamTrack.getSources() [deprecated], which no longer works on Chrome 45 and Firefox 39.
For the new function MediaDevices.enumerateDevices() - see https://stackoverflow.com/a/35480435/2414207
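For completeness, here is a minimal sketch of the enumerateDevices() approach the linked answer describes (it requires a secure context/HTTPS, and device labels are only populated after the user has granted camera permission at least once):

// List all video inputs, then open the first one by deviceId.
navigator.mediaDevices.enumerateDevices().then(function (devices) {
    var cameras = devices.filter(function (d) { return d.kind === 'videoinput'; });
    if (cameras.length === 0) {
        throw new Error('No camera found');
    }
    cameras.forEach(function (cam) {
        console.log('Camera:', cam.label || '(label hidden until permission granted)', cam.deviceId);
    });
    // On mobile you can instead request the rear camera directly with
    // { video: { facingMode: 'environment' } }.
    return navigator.mediaDevices.getUserMedia({
        video: { deviceId: { exact: cameras[0].deviceId } },
        audio: false
    });
}).then(function (stream) {
    document.querySelector('video').srcObject = stream;
}).catch(function (err) {
    console.error('Could not open camera:', err);
});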

How to build an iOS 7-style Audio Recorder app

I am trying to build an audio recorder app similar to the built-in iOS 7 one and am looking for guidance on what controls to use. I understand I will be using a table view for the list of previous recordings and a UIView for the top recording view, and on tapping record I adjust the table view and move the black recording view down.
How should I implement the endless horizontal scrolling view? Should I use a collection view and keep adding elements to the model array as the time increments? Also, what should I use for the timer? Is there something like JavaScript's setInterval for Objective-C that I can use to keep updating the UI at a regular time interval?
If someone also knows of a CocoaPod or sample code, that would be greatly appreciated.
For recording, the simplest audio recorder is AVAudioRecorder. Here is a simple implementation of an audio recording app: https://github.com/calmez/Recorder. AVAudioRecorder has simple metering methods with which you can read the volume output of the channels.
Honestly though, Apple would probably use Core Audio to get the audio because it is more optimized. Novocaine is a good Core Audio engine that could get you started: https://github.com/alexbw/novocaine
For rendering the waveform, I would guess that Apple probably uses OpenGL. I don't see how to do it easily and efficiently otherwise. You could draw it using the standard drawing APIs for UIView, as this project does (https://github.com/fulldecent/FDWaveformView), but I don't see that animating well.
For the timer, there is NSTimer.

Google Maps iOS SDK 1.2 - need snapshot of map view

In 1.1, the GMSScreenshot class provided a rudimentary way to get a snapshot of the entire screen into a UIImage. In 1.2, the class is missing, and in the release notes, it says this:
Calling renderInContext: on the GMSMapView layer now renders correctly;
this allows for snapshots and UI effects
Unfortunately, I'm not finding this to be the case. Typically renderInContext: does not work on OpenGL drawing, but I figured I'd take a shot anyway (it didn't work). Has anyone been successful in getting a (preferred) view or screen snapshot?
I am able to take a screenshot. Here is the code I use:
// Create an image context the same size as the map view.
UIGraphicsBeginImageContext(mapView_.frame.size);
// Render the map view's layer into the current context.
[mapView_.layer renderInContext:UIGraphicsGetCurrentContext()];
// Grab the rendered image and close the context.
UIImage *screenShotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I do not call this straight after I create the map, as it can take a few frames for the map to render.