In my case, I have color-coded some tiles in the HTML client and I want to add a simple color-code key. I have the PNG file I want to use.
I do not require the ability to upload or change the image.
This link seems to achieve what I am looking for, but I am not sure where to implement it. Does all of this code go into the PostRender of the Image control I created?
Add image to lightswitch using local property and file location
Here is the PostRender of the simple Image data item that I created as an Image local property and then dragged into the Solution Designer. It was basically copied from the link above, but I changed the name of the image file to match mine. I have already added the file to the Content\Images folder structure, and it shows in the file view:
myapp.BrowseOrderLines.ColorKey_postRender = function (element, contentItem) {
    // Write code here.
    function GetImageProperty(operation) {
        var image = new Image();
        var canvas = document.createElement("canvas");
        var ctx = canvas.getContext("2d");
        // XMLHttpRequest used to allow external cross-domain resources to be processed using the canvas.
        // unlike crossOrigin = "Anonymous", XMLHttpRequest works in IE10 (important for LightSwitch)
        // still requires the appropriate server-side ACAO header (see https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image)
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            var url = URL.createObjectURL(this.response);
            image.onload = function () {
                URL.revokeObjectURL(url);
                canvas.width = image.width;
                canvas.height = image.height;
                ctx.drawImage(image, 0, 0);
                var dataURL = canvas.toDataURL("image/png");
                operation.complete(dataURL.substring(dataURL.indexOf(",") + 1));
            };
            image.src = url;
        };
        xhr.open('GET', this.imageSource, true);
        xhr.responseType = 'blob';
        xhr.send();
    }

    myapp.BrowseOrderLines.ColorKey_postRender = function (element, contentItem) {
        // Write code here.
        msls.promiseOperation(
            $.proxy(
                GetImageProperty,
                { imageSource: "content/images/Key.png" }
            )
        ).then(
            function (result) {
                contentItem.screen.ImageProperty = result;
            }
        );
    };
};
Currently, the Image control does show on the screen in the browser, at the custom size I chose, but it is just a light blue area that does not display my image file.
I am not sure whether I have embedded the image correctly, or whether that is a step I am missing.
Thank you!!
The easiest method of testing this approach would be to change your postRender to the following (which embeds the helper function within the postRender function):
myapp.BrowseOrderLines.ColorKey_postRender = function (element, contentItem) {
    function GetImageProperty(imageSource) {
        return msls.promiseOperation(
            function (operation) {
                var image = new Image();
                var canvas = document.createElement("canvas");
                var ctx = canvas.getContext("2d");
                // XMLHttpRequest used to allow external cross-domain resources to be processed using the canvas.
                // unlike crossOrigin = "Anonymous", XMLHttpRequest works in IE10 (important for LightSwitch)
                // still requires the appropriate server-side ACAO header (see https://developer.mozilla.org/en-US/docs/Web/HTML/CORS_enabled_image)
                var xhr = new XMLHttpRequest();
                xhr.onload = function () {
                    var url = URL.createObjectURL(this.response);
                    image.onload = function () {
                        URL.revokeObjectURL(url);
                        canvas.width = image.width;
                        canvas.height = image.height;
                        ctx.drawImage(image, 0, 0);
                        var dataURL = canvas.toDataURL("image/png");
                        operation.complete(dataURL.substring(dataURL.indexOf(",") + 1));
                    };
                    image.onerror = function () {
                        operation.error("Image load error");
                    };
                    image.src = url;
                };
                xhr.open('GET', imageSource, true);
                xhr.responseType = 'blob';
                xhr.send();
            }
        );
    }

    GetImageProperty("content/images/Key.png").then(function onComplete(result) {
        contentItem.screen.ImageProperty = result;
    }, function onError(error) {
        msls.showMessageBox(error);
    });
};
This assumes that you named the local property ImageProperty as per my original post and the Image control on your screen is named ColorKey.
In the above example, I've also taken the opportunity to slightly simplify and improve the code from my original post. It also includes a simple error handler, which should flag up any problem with loading the image.
If this still doesn't work, you could test the process by changing the image source file name to content/images/user-splash-screen.png (this PNG file should have been added as a matter of course when you created your LightSwitch HTML project).
As the GetImageProperty function is a helper routine, rather than embedding it within the postRender you'd normally place it within a JavaScript helper module. This allows it to be easily reused without repeating the function's code. If you don't already have such a library and you're interested in implementing one, the following post covers some of the details involved:
Lightswitch HTML global JS file to pass variable
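For illustration, a minimal sketch of such a module is below. The lightswitch.helpers.js file name and the window.helpers namespace are placeholder choices of mine, not anything LightSwitch mandates:

// lightswitch.helpers.js (hypothetical file name)
(function (helpers) {
    // Returns an msls promise that resolves to the base64 image data for imageSource.
    helpers.getImageProperty = function (imageSource) {
        return msls.promiseOperation(function (operation) {
            var image = new Image();
            var canvas = document.createElement("canvas");
            var ctx = canvas.getContext("2d");
            var xhr = new XMLHttpRequest();
            xhr.onload = function () {
                var url = URL.createObjectURL(this.response);
                image.onload = function () {
                    URL.revokeObjectURL(url);
                    canvas.width = image.width;
                    canvas.height = image.height;
                    ctx.drawImage(image, 0, 0);
                    var dataURL = canvas.toDataURL("image/png");
                    operation.complete(dataURL.substring(dataURL.indexOf(",") + 1));
                };
                image.onerror = function () {
                    operation.error("Image load error");
                };
                image.src = url;
            };
            xhr.open('GET', imageSource, true);
            xhr.responseType = 'blob';
            xhr.send();
        });
    };
})(window.helpers = window.helpers || {});

After referencing that script in your default.htm, the postRender reduces to a single call:

myapp.BrowseOrderLines.ColorKey_postRender = function (element, contentItem) {
    window.helpers.getImageProperty("content/images/Key.png").then(function (result) {
        contentItem.screen.ImageProperty = result;
    }, function (error) {
        msls.showMessageBox(error);
    });
};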
Good day,
I am using the latest Autodesk Forge Viewer and I am trying to take a screenshot that also renders my markups. Right now my code takes a screenshot without any markups. Below is my viewer code; I am loading the MarkupsCore and MarkupsGui extensions. Notice the takeSnapshot(viewer) call inside onDocumentLoadSuccess(viewerDocument); the function is defined right before the Initializer call.
function takeSnapshot(target) {
    $('#snipViewer').click(() => {
        target.getScreenShot(1600, 920, (blobURL) => {
            let snip = blobURL;
            $('#sniplink').attr("href", snip);
            $('#sniplink').html('Not Empty');
            $('#sniplink').css({ "background-image": `url(${blobURL})` });
        });
    });
}
// Autodesk Viewer Code
instance.data.showViewer = function showViewer(viewerAccessToken, viewerUrn) {
    localStorage.setItem("viewerAccessTokentoken", viewerAccessToken);
    localStorage.setItem("viewerUrn", viewerUrn);
    var viewer;
    var options = {
        env: 'AutodeskProduction',
        api: 'derivativeV2',
        getAccessToken: function (onTokenReady) {
            var token = viewerAccessToken;
            var timeInSeconds = 3600;
            onTokenReady(token, timeInSeconds);
        }
    };
    Autodesk.Viewing.Initializer(options, function () {
        let htmlDiv = document.getElementById('forgeViewer');
        viewer = new Autodesk.Viewing.GuiViewer3D(htmlDiv);
        let startedCode = viewer.start();
        viewer.setTheme("light-theme");
        viewer.loadExtension("Autodesk.CustomDocumentBrowser").then(() => {
            viewer.loadExtension("Autodesk.Viewing.MarkupsCore");
            viewer.loadExtension("Autodesk.Viewing.MarkupsGui");
        });
        if (startedCode > 0) {
            console.error('Failed to create a Viewer: WebGL not supported.');
            $("#loadingStatus").html("Failed to create a Viewer: WebGL not supported.");
            return;
        }
        console.log('Initialization complete, loading a model next...');
    });
    var documentId = `urn:` + viewerUrn;
    var derivativeId = `urn:` + instance.derivativeUrn;
    Autodesk.Viewing.Document.load(documentId, onDocumentLoadSuccess, onDocumentLoadFailure);
    function onDocumentLoadSuccess(viewerDocument) {
        var defaultModel = viewerDocument.getRoot().getDefaultGeometry();
        viewer.loadDocumentNode(viewerDocument, defaultModel);
        takeSnapshot(viewer);
    }
    function onDocumentLoadFailure() {
        console.error('Failed fetching Forge manifest');
        $("#loadingStatus").html("Failed fetching Forge manifest.");
    }
}
I have already read this article: https://forge.autodesk.com/blog/screenshot-markups
I have tried this method, but the instructions are very unclear to me:
<div style="width:49vw; height:100vh;display:inline-block;"><canvas id="snapshot" style="position:absolute;"></canvas><button onclick="snaphot();" style="position:absolute;">Snapshot!</button></div>
What is the canvas element here for? Am I supposed to call renderToCanvas() when I load the markups extension inside the initializer function, or in my screenshot function? Is there some way I can implement renderToCanvas() without changing too much of what I already have here? I am not an expert with the viewer API, so any help would be very much appreciated. I am a beginner, so please don't skip many steps.
Thank you very much!
Here's somewhat simpler logic for generating screenshots with markups in Forge Viewer, with a bit more explanation below on why it needs to be done this way:
function getViewerScreenshot(viewer) {
    return new Promise(function (resolve, reject) {
        const screenshot = new Image();
        screenshot.onload = () => resolve(screenshot);
        screenshot.onerror = err => reject(err);
        viewer.getScreenShot(viewer.container.clientWidth, viewer.container.clientHeight, function (blobURL) {
            screenshot.src = blobURL;
        });
    });
}

function addMarkupsToScreenshot(viewer, screenshot) {
    return new Promise(function (resolve, reject) {
        const markupCoreExt = viewer.getExtension('Autodesk.Viewing.MarkupsCore');
        const canvas = document.createElement('canvas');
        canvas.width = viewer.container.clientWidth;
        canvas.height = viewer.container.clientHeight;
        const context = canvas.getContext('2d');
        context.clearRect(0, 0, canvas.width, canvas.height);
        context.drawImage(screenshot, 0, 0, canvas.width, canvas.height);
        markupCoreExt.renderToCanvas(context, function () {
            resolve(canvas);
        });
    });
}
// (run inside an async function, e.g. an async button handler)
const screenshot = await getViewerScreenshot(viewer);
const canvas = await addMarkupsToScreenshot(viewer, screenshot);
const link = document.createElement('a');
link.href = canvas.toDataURL();
link.download = 'screenshot.png';
link.click();
Basically, the markups extension can only render its markups (and not the underlying 2D/3D scene) into an existing <canvas> element. That's why this is a multi-step process:
You render the underlying 2D/3D scene using viewer.getScreenShot, getting a blob URL that contains the screenshot image data
You create a new <canvas> element
You insert the screenshot into the canvas (in this case we create a new Image instance and render it into the canvas using context.drawImage)
You call the extension's renderToCanvas that will render the markups in the canvas on top of the screenshot image
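To tie this back to your code, your takeSnapshot helper could await the two functions above instead of calling target.getScreenShot directly. A sketch reusing your #snipViewer button and #sniplink anchor (and assuming the MarkupsCore extension has finished loading by the time the button is clicked):

function takeSnapshot(target) {
    $('#snipViewer').click(async () => {
        const screenshot = await getViewerScreenshot(target);
        const canvas = await addMarkupsToScreenshot(target, screenshot);
        const dataURL = canvas.toDataURL();
        $('#sniplink').attr("href", dataURL);
        $('#sniplink').html('Not Empty');
        $('#sniplink').css({ "background-image": `url(${dataURL})` });
    });
}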
I have a music playing example: http://www.smartjava.org/examples/webaudio/example3.html
I need to show an HTML5 audio player (with controls) for this song. How can I do that?
The JavaScript code from the example is below:
// create the audio context (chrome only for now)
if (!window.AudioContext) {
    if (!window.webkitAudioContext) {
        alert('no audiocontext found');
    }
    window.AudioContext = window.webkitAudioContext;
}
var context = new AudioContext();
var audioBuffer;
var sourceNode;
var analyser;
var javascriptNode;

// get the context from the canvas to draw on
var ctx = $("#canvas").get()[0].getContext("2d");

// create a gradient for the fill. Note the strange
// offset, since the gradient is calculated based on
// the canvas, not the specific element we draw
var gradient = ctx.createLinearGradient(0, 0, 0, 300);
gradient.addColorStop(1, '#000000');
gradient.addColorStop(0.75, '#ff0000');
gradient.addColorStop(0.25, '#ffff00');
gradient.addColorStop(0, '#ffffff');

// load the sound
setupAudioNodes();
loadSound("http://www.audiotreasure.com/mp3/Bengali/04_john/04_john_04.mp3");

function setupAudioNodes() {
    // set up a javascript node
    javascriptNode = context.createScriptProcessor(2048, 1, 1);
    // connect to destination, else it isn't called
    javascriptNode.connect(context.destination);
    // set up an analyser
    analyser = context.createAnalyser();
    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 512;
    // create a buffer source node
    sourceNode = context.createBufferSource();
    sourceNode.connect(analyser);
    analyser.connect(javascriptNode);
    sourceNode.connect(context.destination);
}

// load the specified sound
function loadSound(url) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    // when loaded, decode the data
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            // when the audio is decoded, play the sound
            playSound(buffer);
        }, onError);
    };
    request.send();
}

function playSound(buffer) {
    sourceNode.buffer = buffer;
    sourceNode.start(0);
}

// log if an error occurs
function onError(e) {
    console.log(e);
}

// when the javascript node is called
// we use information from the analyser node
// to draw the volume
javascriptNode.onaudioprocess = function () {
    // get the average for the first channel
    var array = new Uint8Array(analyser.frequencyBinCount);
    analyser.getByteFrequencyData(array);
    // clear the current state
    ctx.clearRect(0, 0, 1000, 325);
    // set the fill style
    ctx.fillStyle = gradient;
    drawSpectrum(array);
};

function drawSpectrum(array) {
    for (var i = 0; i < array.length; i++) {
        var value = array[i];
        ctx.fillRect(i * 5, 325 - value, 3, 325);
    }
}
I think what you want is to use an <audio> tag as your source and use createMediaElementSource to pass the audio to Web Audio for visualization.
Beware that createMediaElementSource checks for CORS access, so you must have appropriate cross-origin permissions for this to work. (It looks like your audio source doesn't return the appropriate access headers.)
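A minimal sketch of that approach, reusing the context, analyser, and javascriptNode from the question's setupAudioNodes (the MP3 URL is the one from the example and would need to be served with CORS headers for the analyser to see any data):

// Create an <audio> element with the native HTML5 controls.
var audio = new Audio();
audio.controls = true;
audio.crossOrigin = "anonymous"; // request CORS access so the analyser can read the samples
audio.src = "http://www.audiotreasure.com/mp3/Bengali/04_john/04_john_04.mp3";
document.body.appendChild(audio);

// Route the element's output through the analyser instead of a buffer source.
var mediaSource = context.createMediaElementSource(audio);
mediaSource.connect(analyser);
analyser.connect(javascriptNode);
mediaSource.connect(context.destination);
// Playback is now driven by the player's own controls.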
I am building an EaselJS app, and I want the user to be able to select an image from their hard drive to be the background for the app. Once they are done interacting with the app and choose to submit I want to upload the image, but not before then. Is there a way I can do this? I am using Rails if that makes any difference.
You can use the readAsDataURL method of FileReader, then assign the result to an Image object, and finally pass that image as the parameter of a new createjs.Bitmap object.
function previewFile() {
    var file = input.files[0]; // input is the file <input> element
    var reader = new FileReader();
    reader.onloadend = function () {
        var image = new Image();
        image.onload = function () {
            var bitmap = new createjs.Bitmap(image);
            stage.addChild(bitmap);
            stage.update(); // redraw the stage so the bitmap shows up
        };
        image.src = reader.result;
    };
    if (file) {
        reader.readAsDataURL(file);
    } else {
        alert("fail");
    }
}
I created a fiddle that does something like that:
https://jsfiddle.net/vampaynani/b5maj7nd/
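Because this approach only reads the file locally, the original File object is still available for a deferred upload when the user submits. A sketch, where the /backgrounds URL and the form field name are hypothetical placeholders for whatever your Rails route expects:

function submitBackground(file) {
    var formData = new FormData();
    formData.append("background[image]", file); // hypothetical field name
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/backgrounds", true); // hypothetical Rails route
    xhr.onload = function () {
        console.log("Upload finished with status " + xhr.status);
    };
    xhr.send(formData);
}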
I would like to change the pitch of a sound file using the HTML 5 Audio node.
I had a suggestion to use setVelocity, and I have found that this is a method of the PannerNode.
I have the following code, in which I have tried changing the call parameters, but with no discernible result.
Does anyone have any ideas, please? Here is the code:
var gAudioContext = new AudioContext();
var gAudioBuffer;
var playAudioFile = function (gAudioBuffer) {
    // (source and gainNode are assumed to be created elsewhere,
    // e.g. source = gAudioContext.createBufferSource() and
    // gainNode = gAudioContext.createGain())
    var panner = gAudioContext.createPanner();
    gAudioContext.listener.dopplerFactor = 1000;
    source.connect(panner);
    panner.setVelocity(0, 2000, 0);
    panner.connect(gainNode);
    gainNode.connect(gAudioContext.destination);
    gainNode.gain.value = 0.5;
    source.start(0); // Play sound
};
var loadAudioFile = (function (url) {
    var request = new XMLHttpRequest();
    request.open('get', 'Sounds/English.wav', true); // (note: the url parameter is ignored here)
    request.responseType = 'arraybuffer';
    request.onload = function () {
        gAudioContext.decodeAudioData(request.response,
            function (incomingBuffer) {
                playAudioFile(incomingBuffer);
            }
        );
    };
    request.send();
}());
I'm trying to achieve something similar and I also failed at using PannerNode.setVelocity().
One working technique I found is to use the following package (an example is included in its README): https://www.npmjs.com/package/soundbank-pitch-shift
It is also possible with a biquad filter, available natively. See an example here: http://codepen.io/qur2/pen/emVQwW
I couldn't find a sample sound that makes the effect obvious (because of CORS restrictions on AJAX loading).
You can read more at http://chimera.labs.oreilly.com/books/1234000001552/ch04.html and https://developer.mozilla.org/en-US/docs/Web/API/BiquadFilterNode.
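If you can live with the tempo changing along with the pitch, one more native option is the playbackRate parameter of AudioBufferSourceNode, which resamples the buffer. A minimal sketch, assuming a decoded buffer like the one in the question:

// Shift pitch by a number of semitones via resampling; this also
// shortens or lengthens the sound, since rate and pitch are coupled.
function playPitched(gAudioContext, buffer, semitones) {
    var source = gAudioContext.createBufferSource();
    source.buffer = buffer;
    source.playbackRate.value = Math.pow(2, semitones / 12);
    source.connect(gAudioContext.destination);
    source.start(0);
}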
Hope this helps!
I'm using Dart to develop a personal whiteboard Chrome app and it is sometimes useful to be able to quickly copy and paste an image (e.g. a slide from a presentation, a diagram or a handout) so that I can add notes over the image while teaching a class or giving a presentation.
How can I paste an image stored on the clipboard onto a canvas element in Dart?
Actually, this answer to the same question for JS is almost directly applicable. A Dart translation might look something like:
import 'dart:html';

void main() {
  var can = new CanvasElement()
    ..width = 600
    ..height = 600;
  var con = can.getContext('2d');
  document.onPaste.listen((e) {
    var blob = e.clipboardData.items[0].getAsFile();
    var reader = new FileReader();
    reader.onLoad.listen((e) {
      var img = new ImageElement()
        ..src = (e.target as FileReader).result;
      img.onLoad.listen((_) {
        con.drawImage(img, 0, 0);
      });
    });
    reader.readAsDataUrl(blob);
  });
  document.body.children.add(can);
}