Using HTML5 in an HTML web resource in CRM 2011 - html

Hi everyone, how are you?
Well, the case is that I need to develop a web resource that will be embedded in the form's header.
This web resource should draw a rectangle with a label for each value of a pick list that is shown on the form.
I tried to use HTML5 to draw the rectangles, but I can't make it work properly once it's included in the web resource.
I'll paste an example here that runs fine if opened directly in IE, but when it's opened through a web resource embedded in the form's header it doesn't work and throws an exception like: 'getContext() function is not defined'.
Here is the code:
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
var xpos = -50;
debugger;
for (var ii = 0; ii < 3; ii++) {
    xpos += 50;
    ctx.fillStyle = "#FF0000";
    ctx.fillRect(xpos, 1, 50, 50);
    ctx.fillStyle = "#000000";
    ctx.font = "10px Arial";
    ctx.fillText("Stage ", xpos + 5, 25);
}
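(For reference, here is a minimal sketch of the kind of page the snippet assumes: a canvas element with id "myCanvas", and the drawing code run after the document has loaded. The width/height values are just placeholders.)
<html>
<head>
<script type="text/javascript">
window.onload = function () {
    // ... the drawing code above goes here ...
};
</script>
</head>
<body>
<!-- the script looks this element up by id -->
<canvas id="myCanvas" width="200" height="60"></canvas>
</body>
</html>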
Question: Can I draw in CRM using HTML5 inside a web resource?
Thanks in advance!

I'm sorry to say it, but the answer marked as correct is not actually correct, because it doesn't answer the question and it also confused me because of an assumption.
Nobody's fault, but let's set things straight for the others who will need the right answer to this question.
The error you received reads: "Object doesn't support property or method 'getContext'" and refers to the HTML5 canvas getContext() method. You got this error because getContext() on the canvas element only works in IE9 (the first IE version with HTML5 support) and your CRM 2011 instance ran in an older IE rendering mode.
Seeing this message on my computer I thought it was strange, because I was running my CRM 2011 instance in IE9, but when I opened the F12 developer tools I saw that the Browser Mode was IE9 while the Document Mode was IE8 standards, which is the default for CRM 2011. So I tried to change it to IE9 standards and, surprise, CRM 2011 then throws JavaScript errors and crashes. It looks like CRM 2011 doesn't actually run in IE9 standards but in IE8 standards, even if the browser version is IE9 (probably this was your case too).
So, for the moment, using HTML5 elements in an HTML web resource in CRM 2011 is not possible.
The only idea I have in mind is that, for now, we can only create standalone applications that contain HTML5 elements and connect to our CRM data, until MS figures out a way to run CRM 2011 in IE9 standards.
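If you still have to ship the web resource today, a quick feature check (just a sketch, assuming the same myCanvas element as in the question) at least avoids the script error in IE8 document mode:
// Sketch: feature-detect canvas so the web resource doesn't blow up in IE8 document mode
var c = document.getElementById("myCanvas");
if (c && c.getContext) {
    var ctx = c.getContext("2d");
    // ... draw the rectangles here ...
} else {
    // No canvas support: fall back to plain HTML/CSS rectangles, or simply skip drawing
}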
If somebody knows more than this, please let us know.
Sorry for any typos!

It looks like you are accessing related records and have to add ClientGlobalContext.js to your web resource. This dynamic JS file gives you a reference to the global context (a connection to CRM objects) so you can query data.
For a project I work on, I reference the JS like the following:
<script type="text/javascript" src="http://{SERVERNAME}[:PORT]/{ORGNAME}/WebResources/ClientGlobalContext.js.aspx"></script>
Also, on second thought, if you are only accessing values from the form itself and not querying any other records, you should not need this.
Regarding HTML5 (I love it!), it's just a browser thing; if it works outside of CRM, there's a pretty good chance it will work within CRM too!
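If you do need the pick-list values from the hosting form, something along these lines should work from inside the web resource (just a sketch: "new_stage" is a made-up attribute name, and it assumes the web resource is embedded on the form so parent.Xrm is reachable):
// Sketch: read the option-set values from the parent CRM 2011 form
// "new_stage" is a placeholder - use your pick list's schema name
var attr = window.parent.Xrm.Page.getAttribute("new_stage");
if (attr) {
    var options = attr.getOptions(); // array of { text, value }
    for (var i = 0; i < options.length; i++) {
        // draw one rectangle + label per option here, e.g. using options[i].text
    }
}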

Related

Rendering with Chromium direct to my own 'canvas' (e.g. GDI+)

First, a quick description of the end-goal:
I'm building a cross-platform, .NET Core-based, printing app. This app will be able to print all sorts of file types with custom page settings, such as headers, footers, and margins. A key feature is it supports multiple-pages-up (e.g. a landscape sheet of paper with two portrait pages rendered side by side...called "2up").
Printing HTML is important not just for its own sake; I also want to use all the great HTML-based syntax highlighting out there for source code (e.g. www.prismjs.com).
The app is basically done but for one major problem: I can't get the HTML to render well enough. So far I've implemented source code printing three ways:
1) As plain text with my own line-numbering and line-wrap engine. This works wonderfully for everything I can throw at it, but it does not support syntax highlighting.
2) Using Html-Renderer (https://github.com/ArthurHub/HTML-Renderer/issues), an OSS .NET-based HTML renderer. This implementation is the weakest because Html-Renderer's CSS support is really weak. It can hardly handle anything prismjs or highlightjs generates.
3) Using litehtml (www.litehtml.com) via LiteHtmlSharp. This was very promising and I almost have it working with some major hacks, but litehtml also does not support key, modern HTML/CSS features.
Neither Html-Renderer nor litehtml supports the CSS page-break-before feature that, combined with media print, would let me ensure lines are not split between pages.
What I really want to use is the Chromium rendering engine. litehtml provides a fantastic API for this sort of problem: It calls me whenever it needs to render, and I draw (text, table borders, images, etc...) using GDI+. My dream is to find something in Chromium (CEF, Puppeteer, ???) that provides a similar API.
Or, an alternative, an API that will let me pass in a GDI+ Graphics (or HDC) and the renderer will render to that surface.
With Html-Renderer I calculate the number of pages like this:
SizeF size = HtmlRender.MeasureGdiPlus(g, html, containingSheet.GetPageWidth());
int numPages = (int)(size.Height / containingSheet.GetPageHeight());
My page rendering code (e.g. OnPaint) looks like this:
SizeF size = new SizeF(containingSheet.GetPageWidth(), containingSheet.GetPageHeight());
HtmlRender.RenderGdiPlus(g, html, new PointF(0, 0), size);
With litehtml the OnPaint code looks like this:
// Set the clip such that any extraLines are clipped off bottom
g.SetClip(new Rectangle(0, 0, (int)Math.Round(PageSize.Width), (int)Math.Round(PageSize.Height - remainingPartialLineHeight)));
LiteHtmlSize size = new LiteHtmlSize(Math.Round(PageSize.Width), Math.Ceiling(PageSize.Height));
litehtml.Document.Draw(0, (int)-yPos, new position {
    x = 0,
    y = 0,
    width = (int)size.Width,
    height = (int)size.Height
});
And in this case the call to litehtml.Document.Draw causes a bunch of callbacks into my app that I process using the same Graphics the OnPaint is called with.
Most discussions of CEF and Chromium point to ScreenshotAsync etc., which will not do, because I need to render to a PRINTER'S HDC (or GDI+), and blitting bitmaps will lose quality.
I have pored over the Chromium source and I cannot find an obvious way to tell CEF/Chromium "render page 1 (defined as Height/Width) to this GDI+ Graphics object", then "render page 2...", etc. The printing support (and how pdfium is integrated) comes close!
Chromium issue 311308 indicates I'm hosed until this work gets picked up again.
Note: I have full access to nodejs within my app. I have built a dotnet/nodejs bridge, which is how I convert the raw text of a source code file to richly formatted, line-numbered, syntax-highlighted HTML via prismjs. This means I could easily use puppeteer/headless Chrome if I could just figure out the right APIs.
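For example, a minimal Puppeteer sketch of what I mean (this only gives paginated PDF output, not direct GDI+ callbacks; the page size values are placeholders and html stands for the highlighted markup):
// Sketch: paginate HTML with headless Chrome via Puppeteer (PDF output, not GDI+)
const puppeteer = require('puppeteer');

(async () => {
    const html = '<pre class="language-js">...</pre>'; // placeholder: prismjs-highlighted markup
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.setContent(html);          // load the highlighted source
    await page.pdf({
        path: 'out.pdf',
        width: '8.5in',                   // placeholder sheet size
        height: '11in',
        printBackground: true             // keep the syntax-highlighting colors
    });
    await browser.close();
})();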
Does anyone have a suggestion that might help? I'm willing to contribute to Chromium if it's not major heart surgery.

mxClient renders different shapes all as squares

Several weeks ago I was asked to upgrade a web application based on a very old version of the mxGraph library (version 2.4). The application also integrated the 'grapheditor', a sort of demo application that later evolved into Diagramly and then into Draw.io. Recently I completed the more problematic step, the transition from the old 'grapheditor' to Draw.io, so I am now able to open all the previous diagrams (saved as plain XML), and modify and save them consistently.
OK, this is the nice part. The bad side is the 'read-only' section of the application, where users can, more or less, only view the graph.
This page is based on the mxClient.js that renders the graph described in the xml through this code:
var graph = new mxGraph(container);
var diagram = mxUtils.parseXml(xml);
var codec = new mxCodec(diagram);
codec.decode(diagram.documentElement, graph.getModel());
graph.fit();
After upgrading the mx library to the latest version (3.9.10), the same code works, but some shapes are not rendered properly; they appear as squares instead of circles, ellipses, etc. The following two images show an example of this misbehavior.
Graph in the draw.io:
Same graph rendered by mxClient:
After some tries I discovered that the old mxClient renders the same graph perfectly (as draw.io does), so I think there must be something wrong (or missing) in my code or in my mxGraph installation/configuration.
As a temporary workaround I can keep in place the old version of mxGraph but obviously I'd like to use the new one.
Can someone give me a hint on this? Any help would be much appreciated.
The tape shape isn't part of core mxGraph, it's part of the GraphEditor example, in the additional shapes JavaScript.
If you look at the style of the ellipse, it's probably not the one in the core, most likely another one from Shapes.js.
Either pull in shapes.js, or use the viewer in draw.io.
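For example, something like this before your decode/render code (a sketch; the path to Shapes.js depends on where you host the grapheditor sources):
<!-- mxClient first, then the extra shape definitions from the grapheditor example -->
<script type="text/javascript" src="mxClient.js"></script>
<script type="text/javascript" src="grapheditor/Shapes.js"></script>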

speechSynthesis.speak not working in chrome

I'm using chrome Version 55.0.2883.87 m (64-bit) on Windows 10.
The following simple HTML file reproduces the problem and is extracted from my more complex app. It is supposed to speak the three words on page load. It works in MS Edge and Firefox but does not work in Chrome. This code was working for me in Chrome with no problem a couple of weeks back.
<html>
<head>
<script type="text/javascript">
window.speechSynthesis.speak(new SpeechSynthesisUtterance("cat"));
window.speechSynthesis.speak(new SpeechSynthesisUtterance("dog"));
window.speechSynthesis.speak(new SpeechSynthesisUtterance("bark"));
</script>
</head>
<body></body>
</html>
I may never know for sure, because this problem was intermittent, but it seemed to go away after I started to cancel right before speak.
utter = new window.SpeechSynthesisUtterance("cat");
window.speechSynthesis.cancel();
window.speechSynthesis.speak(utter);
I don't think the cancel necessarily has to come between the utterance object creation and use, just that it comes before every speak. I may have had a different problem, as I was only creating one utterance object, not a bunch. I only saw it on Chrome 78, using Windows 7, 64-bit. I never saw the problem on Firefox or Edge.
EDIT 2 weeks later. No recurrences after several dozen tries. It seems .cancel() solved my problem. My symptoms were: calling speechSynthesis.speak() in Chrome would sometimes not start the speech. There were no immediate indications of a problem in the code, speechSynthesis.speaking would be true and .pending would be false. There would be no events from the utterance object. Normally, when speech would work, I'd get a 'start' event about 0.1 seconds after calling .speak().
Since 2018, speechSynthesis.speak() is no longer allowed without user activation in Google Chrome; it violates Chrome's autoplay policy. Chrome has revoked its autoplay use, but you can still make use of it by adding a button that makes the call.
You can visit here to check the status provided by Chrome itself; the image linked below clearly shows that the speechSynthesis.speak() call is prohibited without user activation.
Link to image
Link to article by Google Chrome
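For example (a sketch; the button id is arbitrary):
// Sketch: trigger speech from a user gesture so Chrome's autoplay policy allows it
// Assumes a <button id="speakBtn">Speak</button> somewhere in the page
document.getElementById("speakBtn").addEventListener("click", function () {
    var utter = new SpeechSynthesisUtterance("cat dog bark");
    window.speechSynthesis.cancel();   // clear any stuck queue first
    window.speechSynthesis.speak(utter);
});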
To add to this, the issue for me was that the playback rate on the SpeechSynthesisUtterance instance was above 2. I discovered it must be set to 2 or less in Chrome (although it works with higher rates in other browsers like Safari).
In Chrome, if the utterance rate is above 2, window.speechSynthesis gets stuck and needs window.speechSynthesis.cancel() before it will play audio again (at a valid rate of 2 or less) via .speak().
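Something like this (just a sketch) keeps the rate in Chrome's working range and recovers from a stuck queue:
// Sketch: clamp the rate for Chrome and reset a stuck speechSynthesis queue
var desiredRate = 3;                    // example value higher than Chrome allows
var utter = new SpeechSynthesisUtterance("cat dog bark");
utter.rate = Math.min(desiredRate, 2);  // Chrome gets stuck above 2
window.speechSynthesis.cancel();        // recover if a previous bad rate left it stuck
window.speechSynthesis.speak(utter);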
Did your text-to-speech attempt work only once? Here is why.
In Chrome you have to cancel the speechSynthesis, otherwise it's not compliant with Google's autoplay policy. So you should start your script with:
window.speechSynthesis.cancel()
To cancel any speech synthesis that happened before.
resultsDisplay = document.getElementById("rd");
startButton = document.getElementById("startbtn");
stopButton = document.getElementById("stopbtn");
recognition = new (window.SpeechRecognition || window.webkitSpeechRecognition || window.mozSpeechRecognition || window.msSpeechRecognition)();
recognition.lang = "en-US";
recognition.interimResults = false;
recognition.maxAlternatives = 5;
recognition.onresult = function(event) {
    resultsDisplay.innerHTML = "You Said:" + event.results[0][0].transcript;
};
function start() {
    recognition.start();
    startButton.style.display = "none";
    stopButton.style.display = "block";
}
function stop() {
    recognition.stop();
    startButton.style.display = "block";
    stopButton.style.display = "none";
}
.resultsDisplay {width: 100%; height: 90%;}
#stopbtn {display: none;}
<div class="resultsDisplay" id="rd"></div>
<br/>
<center>
<button onclick="start()" id="startbtn">Start</button>
<button onclick="stop()" id="stopbtn">Stop</button>
</center>
Try
utterance = new SpeechSynthesisUtterance("cat, dog, bark");
speechSynthesis.speak(utterance);
I made a Weave at LiveWeave.
Instead of specifying the text when calling new, you could try setting the text, rate, and volume as separate properties on the utterance object, and then converting it to speech.
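For example (a sketch):
// Sketch: configure the utterance via properties instead of the constructor argument
var utterance = new SpeechSynthesisUtterance();
utterance.text = "cat, dog, bark";
utterance.rate = 1;     // normal speed
utterance.volume = 1;   // full volume
speechSynthesis.speak(utterance);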

How does one implement pull-to-refresh with a LongListSelector in Windows Phone 8?

I am writing a new WP8 app using the off-the-shelf LongListSelector that is shipped in the Microsoft.Phone.Controls assembly. Can anyone provide a code example that implements pull-to-refresh, originally made popular by Tweetie for iPhone and now common on iOS and Android? The existing examples use non-standard controls and I'd like to maintain my use of LongListSelector in WP8.
EDIT
I have found a good answer on StackOverflow describing the Twitter sample and how to do this in more detail:
Continuous Pagination with LongListSelector
You do not.
Pull-to-refresh is not a standard Windows Phone interaction, and you therefore should not implement it.
No native/first-party Windows Phone application uses this functionality, and almost no third-party application does either. There is a reason for that.
To refresh the content of a page (or, in your case, a LongListSelector), you should use a refresh ApplicationBarIconButton, just like in the Mail app. That's the standard and preferred way to manage refreshes.
Windows Phone is not Android, nor is it iOS. Keep that in mind when designing an application for it.
It is not a zoo, there are rules.
Actually, I just discovered a project uploaded to the Windows Phone Dev Center on November 30, 2012 that implements "infinite scrolling" using Twitter Search and Windows Phone 8 LongListSelector.
Download this project at: http://code.msdn.microsoft.com/wpapps/TwitterSearch-Windows-b7fc4e5e
If you really must do this (see answer by Miguel Rochefort) then details can be found at http://blogs.msdn.com/b/jasongin/archive/2011/04/13/pull-down-to-refresh-a-wp7-listbox-or-scrollviewer.aspx
Basically, the ScrollViewer has hidden/undocumented states that allow for detecting "compression" at the top or bottom of the list and you can use this to trigger the loading.
This is not completely trivial, but one way of doing it is to use GestureService
this.gestureListener = GestureService.GetGestureListener(containerPage);
this.gestureListener.DragStarted += gestureListener_DragStarted;
this.gestureListener.DragCompleted += gestureListener_DragCompleted;
this.gestureListener.DragDelta += gestureListener_DragDelta;
However, it has some bugs. For example, DragCompleted is not always raised, so you need to double-check for that using ManipulationCompleted event, which seems to be more reliable.
containerPage.ManipulationStarted += delegate { this.manipulationInProgress = true; };
containerPage.ManipulationCompleted += delegate
{
this.manipulationInProgress = false;
PerformDragComplete();
};
Another issue is that DragDelta occasionally reports bad coordinates. So you would need a fix like this:
Point refPosition = e.GetPosition(null);
if (refPosition.X == 0 && refPosition.Y == 0)
{
Tracer.WriteLine("Skipping buggy event");
return;
}
Finally, you can find out whether the list is all the way at the top:
public double VerticalOffset
{
get
{
ViewportControl viewportControl = this.FindChildByName("ViewportControl") as ViewportControl;
if (viewportControl != null)
{
Tracer.WriteLine("ViewPort.Bounds.Top=" + viewportControl.Bounds.Top + " ViewPort.Top=" + viewportControl.Viewport.Top.ToString() + " State=" + this.ManipulationState);
return viewportControl.Bounds.Top - viewportControl.Viewport.Top;
}
return double.NaN;
}
}
You can check out the samples in
https://github.com/Kinnara/WPToolkit
It has an excellent implementation called ListView, an extension of the LongListSelector control, that will really help you out.
And remember, with LongListSelector always try to load at least 20 items. =)
As the WP8 LLS doesn't use a ScrollViewer, I guess you will have to inspect the UI tree to get hold of the viewport control and see what you can do with the ViewportControl.Viewport property ...
Oh ... the Twitter application now uses the pull-to-refresh interaction. I like the UI guidelines of the WP platform, but rules, once mastered, are made to be broken ;)
This post can give you hints on how to get the viewport control and retrieve the scrolling offset. The scrolling offset must be at a particular value when the list is bouncing.

Preloading images (in Chrome) [duplicate]

I am pre-loading some images and then using them in a lightbox. The problem I have is that although the images are loading, they aren't being displayed by the browser.
This issue is specific to Chrome. It has persisted through Chrome 8 - 10, and I've been trying on and off to fix it all this time and have got nowhere.
I have read these similar questions,
Chrome not displaying images though assets are being delivered to browser
2 Minor Crossbrowser CSS Issues. Background images not displaying in Google Chrome?
JavaScript preloaded images are getting reloaded
which all describe similar behaviour, but in Chrome for Mac, whereas this is happening on Windows.
All other browsers seem to be fine.
If you have Firefox and Chrome open, load the page in Firefox, and then in Chrome, the images appear.
Once you have manually loaded the images using the WebKit dev tools, they always show up.
All the links to the images and such are fine and working.
Clearing everything from Chrome doesn't seem to make any difference (cache, history, etc.).
If anyone has any ideas it would be fantastically helpful, as I'm literally all out of options here.
PS, Apologies if there are late replies, I'm off on holiday for a week tomorrow! :D
Update
Here is the javascript function which is preloading the images.
var preloaded = new Array();
function preload_images() {
for (var i = 0; i < arguments.length; i++){
document.write('<');
document.write('img src=\"'+arguments[i]+'\" style=\"display:none;\">');
};
};
Update
I'm still having issues with this, and I've removed the whole preloading images function. Perhaps delivering a style sheet via document.write() isn't the best way?
Chrome might not be preloading them as it's writing to the DOM with no display, so it might be intelligent enough to realise it doesn't need to be rendered. Try this instead:
var preloaded = new Array();
function preload_images() {
    // Create detached Image objects; the browser fetches and caches these
    // even though they are never added to the DOM
    for (var x = 0; x < arguments.length; x++) {
        preloaded[x] = new Image();
        preloaded[x].src = arguments[x];
    }
}
The Javascript Image object has a lot of useful functions as well you might find useful:
http://www.javascriptkit.com/jsref/image.shtml
onabort() - Code is executed when the user aborts the downloading of the image.
onerror() - Code is executed when an error occurs with the loading of the image (i.e. not found).
onload() - Code is executed when the image successfully and completely downloads.
And then you also have the complete property which true/false tells you if the image has fully (pre)loaded.
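For instance, a small sketch putting onload and a counter together (the image URLs are placeholders):
// Sketch: preload images and report when they have all finished loading
var urls = ["img/one.jpg", "img/two.jpg"];   // placeholder URLs
var preloaded = [];
var remaining = urls.length;
for (var i = 0; i < urls.length; i++) {
    preloaded[i] = new Image();
    preloaded[i].onload = function () {
        if (--remaining === 0) {
            // everything is in the cache - safe to open the lightbox now
            console.log("all images preloaded");
        }
    };
    preloaded[i].src = urls[i];
}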
It turns out that Chrome takes HTTP caching into account and discards any preloaded images immediately after preload if caching is (incorrectly) set to expire immediately.
In my case I am generating the images dynamically, and by default the response was sent to the browser with immediate expiration.
To fix it I had to set the following:
Response.Cache.SetExpires(DateTime.Now.AddYears(1));
Response.Cache.SetCacheability(HttpCacheability.Public);
return File(jpegStream, "image/jpeg");