Calculate string length in pixels without using TextBlock - windows-phone-8

I know a common solution is using TextBlock's ActualWidth, but sometimes it can cause poor performance. Therefore, I want to know whether it is possible to calculate string length in pixels directly, given font and device info, in Windows Phone? Something similar to .NET's System.Drawing.Graphics.MeasureString()? Any direction is appreciated.

Related

FirebaseVisionImage / ML Toolkit cropRect() support

I am posting this question by request of a Firebase engineer.
I am using the Camera2 API in conjunction with Firebase-mlkit vision. I am using both barcode and on-platform OCR. The things I am trying to decode are mostly labels on equipment. In testing the application I have found that trying to scan the entire camera image produces mixed results. The main problem is that the field of view is too wide.
If there are multiple bar codes in view, firebase returns multiple results. You can sort of work around this by looking at the coordinates and picking the one closest to the center.
When scanning text, it's more or less the same, except that you get multiple Blocks, many times incomplete (you'll get a couple of letters here and there).
You can't just narrow the camera mode, though - for this type of scanning, the user benefits from the "wide" camera view for alignment. The ideal situation would be if you have a camera image (let's say for the sake of argument it's 1920x1080) but only a subset of the image is given to firebase-ml. You can imagine a camera view that has a guide box on the screen, and you orient and zoom the item you want to scan within that box.
You can select what kind of image comes from the Camera2 API, but firebase-ml spits out warnings if you choose anything other than YUV_420_888. The problem is that there's not a great way in the Android API to deal with YUV images unless you do it yourself. That's what I ultimately ended up doing - I solved my problem by writing a Renderscript that takes an input YUV, converts it to RGBA, crops it, then applies any rotation if necessary. The result of this is a Bitmap, which I then feed into either the FirebaseVisionBarcodeDetector or the FirebaseVisionTextRecognizer.
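For reference, a rough sketch of that convert-crop-rotate step, using the stock ScriptIntrinsicYuvToRGB intrinsic instead of a hand-written kernel (the method name is illustrative, and it assumes the camera's YUV planes have already been packed into an NV21-style byte[] - that packing step is omitted):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.graphics.Rect;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicYuvToRGB;

// Converts an NV21 byte[] to RGBA, then crops to the guide box and rotates.
Bitmap yuvToCroppedBitmap(Context context, byte[] nv21, int width, int height,
                          Rect crop, int rotationDegrees) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB yuvToRgb =
            ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));

    // Feed the raw YUV bytes in, render into a full-size RGBA bitmap.
    Allocation in = Allocation.createSized(rs, Element.U8(rs), nv21.length);
    in.copyFrom(nv21);
    Bitmap rgba = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Allocation out = Allocation.createFromBitmap(rs, rgba);
    yuvToRgb.setInput(in);
    yuvToRgb.forEach(out);
    out.copyTo(rgba);

    // Crop to the guide box, rotating if necessary.
    Matrix m = new Matrix();
    m.postRotate(rotationDegrees);
    return Bitmap.createBitmap(rgba, crop.left, crop.top,
            crop.width(), crop.height(), m, true);
}

The resulting Bitmap then goes to the detectors via FirebaseVisionImage.fromBitmap().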
Note that the bitmap itself causes mlkit runtime warnings, urging me to use the YUV format instead. This is possible, but difficult. You would have to read the byte array and stride information from the original camera2 yuv image and create your own. The object that comes from camera2 is unfortunately a package-protected class, so you can't subclass it or create your own instance - you'd essentially have to start from scratch. (I'm sure there's a reason Google made this class package protected but it's extremely annoying that they did).
The steps I outlined above all work, but with format warnings from mlkit. What makes it even better is the performance gain - the barcode scanner operating on an 800x300 image takes a tiny fraction of the time it takes on the full-size image!
It occurs to me that none of this would be necessary if firebase paid attention to cropRect. According to the Image API, cropRect defines what portion of the image is valid. That property seems to be mutable, meaning you can get an Image and change its cropRect after the fact. That sounds perfect. I thought that I could get an Image off of the ImageReader, set cropRect to a subset of that image, and pass it to Firebase, and Firebase would ignore anything outside of cropRect.
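In sketch form, this is what I expected to work (imageReader and the crop coordinates are placeholders, and ROTATION_0 assumes no device rotation):

import android.graphics.Rect;
import android.media.Image;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata;

// Crop via the Image's own cropRect, then hand the Image straight to ML Kit.
Image image = imageReader.acquireLatestImage();
image.setCropRect(new Rect(cropLeft, cropTop, cropRight, cropBottom)); // guide box
FirebaseVisionImage visionImage = FirebaseVisionImage.fromMediaImage(
        image, FirebaseVisionImageMetadata.ROTATION_0);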
This does not seem to be the case. Firebase seems to ignore cropRect. In my opinion, firebase should either support cropRect, or the documentation should explicitly state that it ignores it.
My request to the firebase-mlkit team is:
Define the behavior I should expect with regard to cropRect, and document it more explicitly
Explain at least a little about how images are processed by these recognizers. Why are they so insistent that YUV_420_888 be used? Maybe only the Y channel is used in decoding? Doesn't the recognizer have to convert to RGBA internally? If so, why does it get angry at me when I feed in Bitmaps?
Make these recognizers either pay attention to cropRect, or state that they don't and provide another way to tell these recognizers to work on a subset of the image, so that I can get the performance (reliability and speed) that one would expect from running ML on a smaller image.
--Chris

what is COLOR_FormatYUV420Flexible?

I wish to encode video/avc on my Android encoder. The encoder (Samsung S5) publishes COLOR_FormatYUV420Flexible as one of its supported formats. Yay!
But I don't quite understand what it is and how I can use it. The docs say:
Flexible 12 bits per pixel, subsampled YUV color format with 8-bit chroma and luma components.
Chroma planes are subsampled by 2 both horizontally and vertically. Use this format with Image. This format corresponds to YUV_420_888, and can represent the COLOR_FormatYUV411Planar, COLOR_FormatYUV411PackedPlanar, COLOR_FormatYUV420Planar, COLOR_FormatYUV420PackedPlanar, COLOR_FormatYUV420SemiPlanar and COLOR_FormatYUV420PackedSemiPlanar formats
This seems to suggest that I can use this const with just about any kind of YUV data: planar, semi-planar, packed, etc. This seems unlikely: how would the encoder know how to interpret the data unless I specify exactly where the U/V values are?
Is there any metadata that I need to provide in addition to this const? Does it just work?
Almost, but not quite.
This constant can be used with almost any form of YUV (planar, semi-planar, packed, and all that). But the catch is that it isn't you who chooses the layout, with the encoder having to support it - it's the other way around. The encoder will choose the surface layout and describe it via the flexible description, and you need to support it, whichever one it happens to be.
In practice, when using this, you don't call getInputBuffers() or getInputBuffer(int index); you call getInputImage(int index), which returns an Image containing pointers to the start of the three planes, plus their row and pixel strides.
Note - when calling queueInputBuffer afterwards, you have to supply a size parameter, which can be tricky to figure out - see https://stackoverflow.com/a/35403738/3115956 for more details on that.
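Here's a minimal sketch of that getInputImage() flow for a hypothetical 1280x720 AVC encoder; the frame-filling step is elided, and frameSize and presentationTimeUs are placeholders (computing the right size is exactly what the linked answer discusses):

import android.media.Image;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import java.io.IOException;

// Ask for flexible YUV and let the encoder dictate the plane layout.
void encodeWithFlexibleYuv() throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

    MediaCodec codec = MediaCodec.createEncoderByType("video/avc");
    codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    codec.start();

    int index = codec.dequeueInputBuffer(10_000);
    if (index >= 0) {
        Image image = codec.getInputImage(index);
        for (Image.Plane plane : image.getPlanes()) {
            // The strides describe whatever layout the encoder chose; copy
            // your Y/U/V samples into plane.getBuffer() honoring both.
            int rowStride = plane.getRowStride();
            int pixelStride = plane.getPixelStride();
            // ... fill plane.getBuffer() for this plane here ...
        }
        int frameSize = 1280 * 720 * 3 / 2; // placeholder - see linked answer
        long presentationTimeUs = 0;        // placeholder
        codec.queueInputBuffer(index, 0, frameSize, presentationTimeUs, 0);
    }
}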

cocos2d-x: How to specify a pixel format for a specific Texture2D

I've set my app to use the RGBA4444 pixel format by default for its Texture2D objects.
Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGBA4444);
This works great -- it has cut my memory footprint in half and 16 bit still looks fine for most images. However, some images are a bit messed up... namely those that are primarily in one colour, such as a particular grassy background texture which uses quite a large number of green shades. It looks absolutely horrible in anything less than 32 bit, and I'd rather not have to create a texture that looks nicer when 16 bit'ified.
The best solution for me would be to be able to set the sprite to use RGBA8888 as an exception. After all, 'setDefaultAlphaPixelFormat()' implies that it can be specified otherwise, right? But I have been as-yet unable to find out how.
Can it be done?
A solution is to change the default format before loading and set it back again afterwards. This is how I did it:
// Remember the current default format.
Texture2D::PixelFormat defaultFormat = Texture2D::getDefaultAlphaPixelFormat();
// Switch the default to 32-bit for the textures that need it.
Texture2D::setDefaultAlphaPixelFormat(Texture2D::PixelFormat::RGBA8888);
// Load RGBA8888 textures here
TextureCache::getInstance()->addImage(IMAGE_BACKGROUND_GRASS);
// Restore the previous default for everything else.
Texture2D::setDefaultAlphaPixelFormat(defaultFormat);
I think this is about as good as it gets, thanks LearnCocos2D. :)

glPointSize equivalent in DirectX 11

I'm rendering a starfield composed of points (D3D11_PRIMITIVE_TOPOLOGY_POINTLIST). If a point gets closer to the camera, I make it double in size. That worked well with OpenGL 1.x using glPointSize(2.0f).
Is there a way to achieve this with DirectX 11 on Windows Phone 8?
What I need is a way to make a rendered point appear bigger based on some custom value.
Any thought is greatly appreciated.
There's no native point sprite support in D3D11. Your best bet is to use instancing with a single quad VB and a per-instance point VB. You'd scale by dividing the quad point deltas by the view depth, or by applying a standard perspective projection matrix (though the latter would also cause the points themselves to converge at higher distances).

Optical character recognition

Hey everyone,
I'm trying to create a program in Java that can read numbers off the screen, and also recognise images on the screen. I was wondering how I can achieve this?
The font of the numbers will always be the same. I have never programmed anything like this before, but my idea of how it works is to have the program take a screenshot, then overlay the image of the numbers with the section of the screenshot image and check if they match, repeating this for each number. If this is the correct way to do this, how would I put that in code?
Thanks in advance for any help.
You could always train a neural net to do it for you. They can get pretty accurate sometimes. If you use something like Matlab, it actually has capabilities for that already. Apparently there's a neural network library for Java (http://neuroph.sourceforge.net/), although I've never used it personally.
Here's a tutorial about using neuroph: http://www.certpal.com/blogs/2010/04/java-neural-networks-and-neuroph-a-tutorial/
You can use a neural network, support vector machine, or other machine learning construct for this. But it will not do the entire job. If you take a screenshot, you are going to be left with a very large image on which you will need to find the individual characters. You also need to deal with the fact that the camera might not be pointed straight at the text that you want to read. You will likely need to use a series of algorithms to lock onto the right parts of the image and then downsample it so that size becomes neutral.
Here is a simple Java applet I wrote that does some of this.
http://www.heatonresearch.com/articles/42/page1.html
It lets you draw on a relatively large area and locks in on your char. Then it recognizes it. I am using the alphabet, but digits should be easier. The complete Java source code is included.
One simpler approach could be to use template matching. If the font is the same and/or the size (in pixels) is known, then simple template matching can do the job for you. If the size of the input is unknown, you might have to create copies of the images at different scales and do the matching at each scale.
The match with the extreme value (highest or lowest, depending on the method you follow for template matching) is your result.
Follow this link for details
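As a rough illustration, here is a minimal single-scale sketch in plain Java, scoring positions by sum of squared differences (lower is better); the file names are hypothetical and the multi-scale loop is left out:

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class TemplateMatch {
    // Slides the template over the scene and returns {bestX, bestY} of the
    // position with the lowest sum of squared differences over R, G and B.
    static int[] match(BufferedImage scene, BufferedImage template) {
        int maxX = scene.getWidth() - template.getWidth();
        int maxY = scene.getHeight() - template.getHeight();
        long best = Long.MAX_VALUE;
        int bestX = 0, bestY = 0;
        for (int y = 0; y <= maxY; y++) {
            for (int x = 0; x <= maxX; x++) {
                long ssd = 0;
                for (int ty = 0; ty < template.getHeight(); ty++) {
                    for (int tx = 0; tx < template.getWidth(); tx++) {
                        int a = scene.getRGB(x + tx, y + ty);
                        int b = template.getRGB(tx, ty);
                        // Compare each 8-bit colour channel separately.
                        for (int shift = 0; shift <= 16; shift += 8) {
                            int d = ((a >> shift) & 0xFF) - ((b >> shift) & 0xFF);
                            ssd += (long) d * d;
                        }
                    }
                }
                if (ssd < best) { best = ssd; bestX = x; bestY = y; }
            }
        }
        return new int[] { bestX, bestY };
    }

    public static void main(String[] args) throws Exception {
        // "screenshot.png" and "digit7.png" are hypothetical file names.
        BufferedImage scene = ImageIO.read(new File("screenshot.png"));
        BufferedImage digit = ImageIO.read(new File("digit7.png"));
        int[] hit = match(scene, digit);
        System.out.println("Best match at (" + hit[0] + ", " + hit[1] + ")");
    }
}

You would run this once per digit template and, at each location, pick the template with the best score.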