Logarithmic Scale for JavaScript SciChart

I am building an application that requires toggling between linear and logarithmic scales. The application does real-time socket streaming. I'm curious whether anyone has recommendations on how best to implement this functionality. I see it exists as part of the WPF library, but it is not currently supported in the JavaScript version.

Update for you, Logarithmic Axis is now supported. You can check out an example here: https://www.scichart.com/example/javascript-chart/javascript-chart-logarithmic-axis/
How to add a Logarithmic Axis to a SciChartSurface in JavaScript:
import { SciChartSurface } from "scichart";
import { LogarithmicAxis } from "scichart/Charting/Visuals/Axis/LogarithmicAxis";
import { ENumericFormat } from "scichart/types/NumericFormat";    
// Create a SciChartSurface
const { sciChartSurface, wasmContext } = await SciChartSurface.create(divElementId);
// Create an X Axis
const xAxisLogarithmic = new LogarithmicAxis(wasmContext, {
    logBase: 10,
    labelFormat: ENumericFormat.Scientific,
    labelPrecision: 2,
    minorsPerMajor: 10
});
sciChartSurface.xAxes.add(xAxisLogarithmic); 
Full source code for the demo is here on GitHub. Documentation is here
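Since the original question is about toggling between linear and logarithmic scales, here is a minimal sketch of one way to do it by swapping the X axis at runtime. The helper below is illustrative rather than part of the official example, and it assumes the xAxes collection exposes the ObservableArray-style clear() and add() methods:
import { NumericAxis } from "scichart/Charting/Visuals/Axis/NumericAxis";

// Illustrative helper: replace the current X axis with a linear or logarithmic one.
function setXAxisScale(sciChartSurface, wasmContext, useLog) {
    const newAxis = useLog
        ? new LogarithmicAxis(wasmContext, {
              logBase: 10,
              labelFormat: ENumericFormat.Scientific,
              labelPrecision: 2
          })
        : new NumericAxis(wasmContext);
    // Assumes the ObservableArray-style collection API (clear/add).
    sciChartSurface.xAxes.clear();
    sciChartSurface.xAxes.add(newAxis);
}
Call setXAxisScale(sciChartSurface, wasmContext, true) to switch to a log scale and with false to switch back. Existing renderable series that use the default axis id should pick up the new axis, though that is worth verifying against your SciChart.js version.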

Logarithmic Axis is supported by SciChart WPF, iOS and Android, but not in JavaScript yet.
It is on the roadmap, as several users have requested it. We will be delivering an update soon!

Related

Autodesk forge transform extension turn on / off X, Y, Z axes

I'm using the transform extension. Sometimes I want the x, y, and z axes to be hidden. For example, in this three.js example, the x, y, or z axes can be shown or hidden by pressing the corresponding key, via a property such as .showX. When I try this in the Forge extension, there is no such property. What can I do about it?
I am simply writing code like the following. The console log is written, but the axes are still visible.
document.addEventListener('keydown', function (event) {
    if (event.key === "x") {
        console.log("X press");
        _transformControlTx.showX = !_transformControlTx.showX;
    }
    if (event.key === "y") {
        console.log("Y press");
        _transformControlTx.showY = !_transformControlTx.showY;
    }
    if (event.key === "z") {
        console.log("Z press");
        _transformControlTx.showZ = !_transformControlTx.showZ;
    }
});
This question was discussed at a Forge Accelerator last week. The discussion is copied here for reference:
Xiaodong:
The transform tool is based on THREE.TransformControls. In the new version of THREE.js, it looks like it supports handling the visibility of the axes, but since Forge Viewer is packaged from an old version of three.js, I am afraid it does not support that yet. I have copied my colleagues #Varun Patil (Forge Team) and #Petr Broz (Forge Team); because of the time difference, it may take some time for them to comment.
It seems there is one more workaround from the community: Three.js Transform Controls - How to show only two arrows
Eason:
I compared the two TransformControls implementations, the one from the three.js example and the Forge Viewer's. Unfortunately, they are totally different. The Forge Viewer's TransformControls is based on three.js r71, so it doesn't have direct support for showing/hiding axes. You need to do that as the Stack Overflow thread above does.
Petr:
I'm afraid that the transform tool from three.js might be manipulating the scene graph during runtime, adding and removing 3D objects as needed; that would explain why your modifications don't seem to stick. In general, if you need to modify some component of three.js that is not extensible, instead of fighting with the implementation I'd probably suggest building your own. If all you need is movement in X or Y, you could just have a 3D object with two thin cylinders, red and green, and handle their mouseover, mousedown, mousemove, and mouseup events.
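To make the workaround concrete, here is a hedged sketch of the approach from the Stack Overflow thread mentioned above: walk the control's object graph and hide the handles for an axis by name. The handle names ("X", "XY", "XYZ", ...) are an assumption based on the stock three.js gizmos, and viewer is assumed to be your GuiViewer3D instance, so inspect the actual objects in your viewer version first:
// Hedged sketch: hide/show the handles of one axis by traversing the gizmo.
// Handle names are assumed to contain the axis letter, as in the stock
// three.js TransformControls; verify against your Forge Viewer version.
function setAxisVisible(transformControl, axis, visible) {
    transformControl.traverse(function (child) {
        if (child.name && child.name.indexOf(axis) !== -1) {
            child.visible = visible;
        }
    });
    // Forge Viewer renders on demand, so force a repaint afterwards.
    viewer.impl.invalidate(true);
}

// Usage, tied to the key handler from the question:
// setAxisVisible(_transformControlTx, "X", false);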

Swapping cameras during a video conference using WebRTC

Is it possible to swap the camera during a video call on mobile?
I can get the available cameras using MediaStreamTrack.getSources().
But I'm not able to swap them mid-call.
Any ideas on how to swap cameras during a video call using HTML and JavaScript? (I'm developing a hybrid app.)
Also: is it possible to make Safari (iOS) compatible with WebRTC without any plugin?
Any ideas on how to swap cameras during a video call using HTML and JavaScript?
Something like this should work in Chrome*:
function switchCamera() {
    // get new stream (different camera)
    getUserMedia(constraints, function(newStream) {
        var oldStream = peerConnection.getLocalStreams()[0];
        // toggle streams
        peerConnection.removeStream(oldStream);
        peerConnection.addStream(newStream);
        // re-negotiate
        peerConnection.createOffer(function (offer) {
            peerConnection.setLocalDescription(offer);
            // sendOfferToPeers(offer);
        });
    }, function(e) {
        console.log(e);
    });
}

function gotOffer(offer) {
    peerConnection.setRemoteDescription(offer);
    peerConnection.createAnswer(function(answer) {
        peerConnection.setLocalDescription(answer);
        // sendAnswerToPeers(answer);
    });
}

function gotAnswer(answer) {
    peerConnection.setRemoteDescription(answer);
}
*You mentioned something about being on mobile, so the way you obtain the stream might be different. Also, I didn't test this code and the API may look different; have a look here for the correct function calls (createAnswer, etc.). But the gist of it should be the same:
Acquire new stream
Update peerConnection
Inform peers (re-negotiate)
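As a side note beyond the original answer: in current browsers a simpler route is to enumerate the cameras and swap the outgoing video track with RTCRtpSender.replaceTrack(), which avoids re-negotiation entirely. The device-selection logic below is only illustrative:
// Pick a different camera and swap the outgoing video track in place.
async function switchCameraModern(peerConnection, currentDeviceId) {
    const devices = await navigator.mediaDevices.enumerateDevices();
    const cameras = devices.filter(d => d.kind === 'videoinput');
    const next = cameras.find(d => d.deviceId !== currentDeviceId) || cameras[0];

    const stream = await navigator.mediaDevices.getUserMedia({
        video: { deviceId: { exact: next.deviceId } }
    });
    const newTrack = stream.getVideoTracks()[0];

    // Find the sender that is currently transmitting video and replace its track.
    const sender = peerConnection.getSenders().find(s => s.track && s.track.kind === 'video');
    await sender.replaceTrack(newTrack); // no re-negotiation needed
    return next.deviceId;
}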
Also: is it possible to make Safari (iOS) compatible with WebRTC without any plugin?
No. You can have a look here for updates.

Google Maps using three.js and WebGL

I have thousands of points that need to be plotted on Google Maps, and I got a very responsive map using the example from https://github.com/ubilabs/google-maps-api-threejs-layer .
Has anyone had a play with this? I'm wondering if it is possible to have different colored markers and marker click events.
Appreciate any pointers or examples online.
Millions of clickable data points can be painted on a Google Map using WebGL.
A data point is represented by an x,y pair for a location on the canvas, an int for size, an off screen color, and an on screen color. These values are stored in separate typed arrays.
Each data point has a unique rgb color to act as a key in a lookup table of data point ids.
Create a texture to store the off-screen colors and render it to an off-screen buffer. On a click event, bind the off-screen buffer and use gl.readPixels to retrieve the RGB color of the pixel clicked, then find the id in the lookup table. Points in the on-screen buffer, what the user sees, can share a common color.
canvas.addEventListener('click', function(ev) {
    // insert code to get mouse x,y position on canvas
    var pixels = new Uint8Array(4);
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    if (colorToId[pixels[0] + " " + pixels[1] + " " + pixels[2]]) {
        // Pixel clicked is a data point on the map
    }
});
Webgl code is lengthy, so only the click event is included.
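For context, here is a hedged sketch of the color-key bookkeeping described above. The names (numPoints, pointIds, colorToId, offscreenColors) are illustrative and not taken from the linked demo:
// Assign each data point a unique off-screen color derived from its index,
// and remember the reverse mapping so a picked pixel can be resolved to an id.
var colorToId = {};
var offscreenColors = new Float32Array(numPoints * 3);

for (var i = 0; i < numPoints; i++) {
    // Encode the index i into an RGB triple (supports up to 2^24 points).
    var r = (i >> 16) & 0xFF;
    var g = (i >> 8) & 0xFF;
    var b = i & 0xFF;
    offscreenColors[i * 3]     = r / 255;
    offscreenColors[i * 3 + 1] = g / 255;
    offscreenColors[i * 3 + 2] = b / 255;
    // Key format matches the click handler above: "r g b" of the raw bytes.
    colorToId[r + " " + g + " " + b] = pointIds[i];
}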
Here is a live demo and a repo. (angular and coffeescript)
Here is a second example using plain js: webgl-picking-geo-polygons
Here is react-webgl-leaflet
The solution is based on Brendan Kenny's Google Maps + HTML5 + Spatial Data Visualization which explains some of the code in the excerpt above at the 30 min mark, and Displaying WebGL data on Google Maps.
The demo features less than ten data points, but you can just as easily paint over 16 million pickable data points using all combinations of rgb values.
I discovered OpenLayers this past week. Very, very impressive framework. I would strongly suggest taking a look at it. OpenLayers.org is an open source JavaScript web mapping library distinguished from other alternatives, like Leaflet or Google Maps APIs, because of its huge set of components.
I spent the entire weekend creating sample apps by integrating OpenLayers with APIs such as MapBox, WebGL, etc. After all was said and done, I was extremely impressed with OpenLayers, and I plan to make use of OpenLayers in an upcoming POC/project.
Here is a link to my test harness. From there you can also download the code for all of the examples.
Updates for 2021!!
Google Maps JS now has a WebglOverlayView class and exposes the WebGL context.
const webglOverlayView = new google.maps.WebglOverlayView();

webglOverlayView.onAdd = () => {
    // Do setup that does not require access to rendering context.
};

webglOverlayView.onContextRestored = gl => {
    // Do setup that requires access to rendering context before onDraw call.
};

webglOverlayView.onDraw = (gl, coordinateTransformer) => {
    // Render objects.
};

webglOverlayView.onContextLost = () => {
    // Clean up pre-existing GL state.
};

webglOverlayView.onRemove = () => {
    // Remove all intermediate objects.
};

webglOverlayView.setMap(map);
Additionally, @googlemaps/three extends this class for easier use with ThreeJS.
// Imports assume three.js and the 2021 release of @googlemaps/three,
// which exported the latLngToVector3 helper.
import { Scene, Mesh, BoxBufferGeometry, MeshNormalMaterial } from "three";
import { ThreeJSOverlayView, latLngToVector3 } from "@googlemaps/three";

// instantiate a ThreeJS Scene
const scene = new Scene();

// Create a box mesh
const box = new Mesh(
    new BoxBufferGeometry(10, 50, 10),
    new MeshNormalMaterial(),
);

// set position at center of map
box.position.copy(latLngToVector3(mapOptions.center));

// set position vertically
box.position.setY(25);

// add box mesh to the scene
scene.add(box);

// instantiate the ThreeJS Overlay with the scene and map
new ThreeJSOverlayView({
    scene,
    map,
});
Here is a link to a jQuery/google map app. Not exactly what you are looking for; however you might find the example useful. Feel free to use - it can be downloaded from my site.
Link to app on my website
Click here to download the zip

How to detect hand gestures in a live webcam using JavaScript?

I embedded a live webcam in an HTML page. Now I want to detect hand gestures. I don't have any idea how to do this using JavaScript. I Googled a lot but didn't find a good approach. Does anyone know how to do this?
Accessing the webcam requires the HTML5 WebRTC API which is available in most modern browsers apart from Internet Explorer or iOS.
Hand gesture detection can be done in JavaScript using Haar Cascade Classifiers (ported from OpenCV) with js-objectdetect or HAAR.js.
Example using js-objectdetect in JavaScript/HTML5: Open vs. closed hand detection (the "A" gesture of the American Sign Language alphabet)
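For completeness, here is a minimal sketch of getting the webcam into a <video> element that a detector such as js-objectdetect can then read frames from, using the current getUserMedia API (element ids are illustrative):
<video id="webcam" autoplay playsinline></video>
<script>
// Request the camera and attach the stream to the video element.
navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
        document.getElementById('webcam').srcObject = stream;
    })
    .catch(function (err) {
        console.error('Could not access the webcam:', err);
    });
</script>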
Here is a JavaScript hand-tracking demo. It relies on HTML5 features which are not yet enabled in all typical browsers, it doesn't work well at all here, and I don't believe it covers gestures, but it might be a start for you: http://code.google.com/p/js-handtracking/
You need a motion-detecting device (camera); you can use a Kinect to get the motion of different parts of the body. You will have to send data to the browser describing the body parts and their positions, and you can then manipulate that data according to your requirements.
Here you can find how to do it: Motion detection and rendering
More about Kinect: General info
While this is a really old question, there are some new opportunities to do hand tracking using fast neural networks and images from a webcam, and in JavaScript. I'd recommend the Handtrack.js library, which uses TensorFlow.js for just this purpose.
Simple usage example.
<!-- Load the handtrackjs model. -->
<script src="https://cdn.jsdelivr.net/npm/handtrackjs/dist/handtrack.min.js"> </script>

<!-- Replace this with your image. Make sure CORS settings allow reading the image! -->
<img id="img" src="hand.jpg"/>
<canvas id="canvas" class="border"></canvas>

<!-- Place your code in the script tag below. You can also use an external .js file -->
<script>
    // Notice there is no 'import' statement. 'handTrack' and 'tf' are
    // available on the page because of the script tag above.
    const img = document.getElementById('img');
    const canvas = document.getElementById('canvas');
    const context = canvas.getContext('2d');

    // Load the model.
    handTrack.load().then(model => {
        // Detect hands in the image.
        model.detect(img).then(predictions => {
            console.log('Predictions: ', predictions);
        });
    });
</script>
Demo
Running codepen
Also see a similar neural network implementation in python -
Disclaimer: I maintain both projects.

HTML5 Canvas: Calculate the Mouse Position after Zooming and Translating

I am trying to develop an interactive viewer for vector drawings and want to have a zoom feature.
The zoom function works pretty well, but now I have the problem of calculating the mouse position for picking objects.
The event gives back screen coordinates. The canvas doesn't have a method to apply the transformation matrix in the inverse direction.
Does anyone have a solution to this problem?
I made a very small and simple class for keeping track of the transformation matrix.
I added an invert() function for reasons like this. I also made an invertPoint() function but didn't put it in the final version. It's not hard to deduce, though: just invert and transform the point together.
I often just calculate the appropriate transform with this class and then use setTransform, depending on the application.
I wish I could give you a more specific solution but without a code sample of what you want that'd be hard to do.
Here's the transformation class code. And here's a blog post with a bit of an explanation.
Here are some valuable functions for your library that preserve the matrix state and are needed to build up a scene graph:
Transform.prototype.reset = function() {
    this.m = [1, 0, 0, 1, 0, 0];
    this.stack = [];
};

Transform.prototype.push = function() {
    this.stack.push(this.m.slice());
};

Transform.prototype.pop = function() {
    this.m = this.stack.pop();
};
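For the original question, mapping a mouse position back into drawing coordinates, a minimal invertPoint() along the lines described above could look like this. It assumes the same [a, b, c, d, e, f] matrix layout used by context.setTransform and by the class's other methods:
Transform.prototype.invertPoint = function(px, py) {
    var m = this.m;
    // Determinant of the 2x2 linear part.
    var det = m[0] * m[3] - m[1] * m[2];
    // Inverse matrix in the same [a, b, c, d, e, f] layout.
    var inv = [
        m[3] / det,
        -m[1] / det,
        -m[2] / det,
        m[0] / det,
        (m[2] * m[5] - m[3] * m[4]) / det,
        (m[1] * m[4] - m[0] * m[5]) / det
    ];
    // Apply the inverse to the screen-space point to get world coordinates.
    var x = px * inv[0] + py * inv[2] + inv[4];
    var y = px * inv[1] + py * inv[3] + inv[5];
    return [x, y];
};

// Example: picking after zoom/translate (event coordinates are assumed to be
// relative to the canvas, e.g. evt.offsetX / evt.offsetY):
// var worldPos = transform.invertPoint(evt.offsetX, evt.offsetY);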