Understanding Memory Counters in WP8

Developing a Windows Phone 8 app.
I have noticed that there are these performance counters on the right-hand side of my device/emulator.
On my device the third number (35, etc.) is constantly displayed in a red font. On the emulator it is not (see image below). I am not too concerned about that, as I am sure the emulator is not a direct parallel to a real device. What does concern me is that it is red on my device. Should I be worried about this?
Also, a side note on a previous Stack Overflow question, What do the numbers mean in the upper right hand side of the emulator?
There are 6 different numbers there.
I only have 4.
So, is this the surface counter? That would make sense, as I am updating my Image controls.
Any enlightenment would be great.

Incorrect touch location on 4k screen

I've been having some problems with touch detection in my current project: the higher the touch is on either axis, the further the registered point drifts from the actual touch location.
In the image below you can see the issue I am talking about. The dot starts by rendering to the left of the mouse, but by the time the mouse reaches the right side of the screen the dot is to the right of the mouse. The issue is much more obvious on the y-axis.
The code used to create the above test is just the LibGDX SimpleTouchTest from https://github.com/libgdx/libgdx/wiki/Mouse%2C-touch-%26-keyboard, so it is the absolute bare minimum.
The weird thing is, this only appears to be an issue on my laptop; on my desktop the dot sits under the mouse at all times. Both computers are running Windows 10, so the only difference I can see is that my laptop has a 4K display.
Has anyone found a solution to this issue?
Thanks
Adrian
So for some reason this is now working. Nothing has changed in the code, so I can only assume something was out of whack within Windows or the graphics driver that has somehow sorted itself out over the last few weeks. Thanks to those that took the time to read my question.

WebGL and GPU affinity with PixiJS

I have two separate graphics cards on my machine (2x NVS 315) and three monitors attached to them (2 monitors on one card, 1 monitor on the other card).
To see how well my PixiJS code would perform when running side-by-side on multiple browser windows, I opened one Chrome window in each monitor.
To my surprise, the GPU handling my primary screen (GPU#1) was doing most of the work (100% usage really) while the other GPU (GPU#2) stayed at around 40% usage. This was still true even after I ran a browser window only on the monitor attached to GPU#2.
My expectation was that GPU#2 would do all the work when a browser window was rendered on the monitor attached to it. Apparently this is not the case, and GPU#1 was still at 70% usage while GPU#2 was at 40% usage.
To be honest, I'm pretty sure this is not an issue with PixiJS but rather an issue with Chrome/WebGL/OpenGL.
I then made a few experiments with other OpenGL games running in both windowed mode and fullscreen and saw the same behaviour. It seems the GPU associated with the primary screen always does most of the work.
I saw a possible explanation of this behaviour here: https://superuser.com/questions/731852/how-is-gpu-affinity-decided-in-a-multi-gpu-configuration#comment939363_731852
Is there any way in WebGL/PixiJS to specify GPU affinity?
Not as far as I know.
Very few apps switch GPUs. Almost all of them just use the primary GPU (as in default/first). Rendering happens on that GPU and then results are transferred to the other GPU so they can be put on the screen. Microsoft has a few examples of how to switch GPUs by checking which screen the window is mostly on but few apps that I know of use that. At best, a few games that go fullscreen will use the correct GPU for each screen.
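One partial lever that has since appeared is the powerPreference WebGL context attribute, which recent PixiJS versions also accept in their renderer options. Note that it is only a hint for discrete-versus-integrated GPU selection, not a way to pin rendering to whichever card drives a given monitor. A minimal sketch, assuming a browser that honours the attribute:

// Sketch: asking the browser for a particular GPU class when creating
// a WebGL context. This is a hint only; it cannot bind rendering to a
// specific card in a dual-GPU, multi-monitor setup like the one above.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl', {
  powerPreference: 'high-performance', // or 'low-power' / 'default'
});

if (gl) {
  // Where available, WEBGL_debug_renderer_info reveals which GPU the
  // browser actually chose for this context.
  const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');
  if (debugInfo) {
    console.log(gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL));
  }
}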

How can I use Oculus Rift as monitor?

Although not completely a programming-related question, I need this information to progress in the development of a game that is not run through the Oculus Rift, but rather runs normally as an application on the screen, featuring the two "eye holes".
I have got the full release of the Oculus Rift, not one of the development kits. From what I've read and understood, the development kits mirrored the screen by default, but the full version does not. The full version, from what I've heard, instead uses its own driver for displaying graphics, which is not actually visible on the screen.
What I would like to do is switch over so that I can see the screen through the Oculus. How would one go about doing this?
Should you feel this question is not on topic for the Stack Overflow community, do tell me where it should be posted. I do feel, however, that programmers are the right audience for this question.
What I would like to do is switch over so that I can see the screen through the Oculus. How would one go about doing this?
There are a number of applications that do things along these lines, including SteamVR, Virtual Desktop, and Envelop VR.
From what I've read and understood, the development kits mirrored the screen by default, but the full version does not.
At no point has the Rift actually acted to mirror what's on screen. What you may be referring to is that the previous development kits, when plugged in, would be visible to the operating system as just another monitor. However, anything that was on this monitor (windows, backgrounds, etc.) was not actually usable, because the Rift display is divided in half, so each eye would see part of a distorted half of the screen. Completely unusable as a 'monitor replacement'.
Should you feel this question is not on topic for the Stack Overflow community, do tell me where it should be posted.
Probably Super User, or alternatively a proposed VR-specific Stack Exchange.
I do feel, however, that programmers are the right audience for this question.
Lots of questions are useful to software developers, but not specific to them. This site is for questions specific to software developers.
You want the Virtual Desktop app.

Why not always use 2x images when building web pages? [closed]

I was having a discussion with a designer friend yesterday, the salient points of which I'll detail below:
2x images are larger files, but not the 4x you may think. In one example a 1x image file is 47 KB and its 2x counterpart is only 55 KB.
2x images are only for Retina screens, and despite being phased in on the desktop/laptop side, the truth is that most Retina screens are mobile.
While Wi-Fi is becoming fairly ubiquitous, desktops (mostly 1x) are the only devices that never have to download data off a cell network.
These all led me to posit the question: why are we spending energy on providing 2x images, when they are mostly accessed by mobile devices, which have the greatest bandwidth limitations?
After sleeping on it, I started to wonder: well, fine, if we're going to ignore that last issue, why not just deal only in 2x? CSS can handle scaling down the images in any case (perhaps I'm wrong here?), so why not save the media queries and save the effort to generate and store two copies of every image by just using 2x everywhere?
Am I crazy?
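To make the proposal concrete, here is a minimal sketch of both approaches in script form rather than CSS; the "@2x" file-naming convention and the 1.5 pixel-ratio cutoff are assumptions for illustration, not anything from the question:

// Option A: pick the 1x or 2x asset at runtime instead of via media queries.
function assetUrl(base: string): string {
  const retina =
    window.devicePixelRatio >= 1.5 ||
    window.matchMedia('(min-resolution: 2dppx)').matches;
  return retina ? base.replace(/(\.\w+)$/, '@2x$1') : base;
}

// Option B: the "always 2x" idea - serve only the 2x file and declare
// the 1x dimensions so the browser scales it down everywhere.
const img = document.createElement('img');
img.src = 'photo@2x.png'; // e.g. an 800x600 file...
img.width = 400;          // ...displayed in a 400x300 CSS-pixel box
img.height = 300;
document.body.appendChild(img);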
The file size issue does bother me. I think things should be as small as possible.
If you're not worried about that, though, the only (and temporary) issue I can think of is browser support for background-size. IE8 doesn't support it, and it's still used enough to have to worry about it (at least on my projects). There's a polyfill for it, but it's not up to snuff with the real thing.
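If you do need to gate on that, a runtime feature check is straightforward. A sketch (the no-bgsize class name is made up for this example):

// Detect CSS background-size support before relying on downscaled
// 2x background images (IE8 and older lack it).
const supportsBackgroundSize =
  'backgroundSize' in document.documentElement.style;

if (!supportsBackgroundSize) {
  // Flag the document so stylesheets can fall back to 1x assets,
  // or load the polyfill here instead.
  document.documentElement.className += ' no-bgsize';
}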
I think the answer depends on where you live in the world. Believe it or not, I live in an area with a lot of country roads that still only have dial-up. Sometimes even that does not work. We are still not nearly as high-speed as we should be with desktops. I can't imagine making those users download the extra data when a 1x image would do.
So, I think it depends on your target audience, where they are more likely to live or what devices they use. We will get there in time, but for some, not yet.
Let analytics be your guide
I use the 5% rule: once any feature is more than 2 standard deviations out of the norm, I drop support for it. In browser land, that means IE6 and IE7 are gone for me, but I keep supporting IE8 because a sizeable chunk of the audience is still using it. Yeah, the big guys like Google have dropped it, but I still see a good chunk of traffic from it on a lot of sites. Why make them suffer?
Now, how this relates to your question: ask yourself what percentage of your audience is on a 10 Mbps+ LTE connection with a retina screen. Maybe in your case it's 95% retina screens with LTE on mobile, but check your analytics package. My guess is that it's probably under 20%, in which case having fallbacks gives a better UX to 80% of your audience - easily worth the effort.
In my opinion, these are the problems I see:
some older phone models (e.g., iPhone 3G) and tablets (e.g., iPad 1) have low memory. A big enough image can cause out-of-memory errors.
to scale an image, the system has to load it at full size and do a complex scaling operation each time it draws it (sometimes the result is cached).
a scaled-down image doesn't look as good.
you can run into problems with them in older browsers (as mentioned by Bill Criswell).
it increases download size. If we consider a 10 KB increase in size per picture at 10 pictures per page, then you get 100 KB per page load. If your page has to display a lot of images (think social), the overhead is a lot.
you can improve your search engine rankings if your page loads faster and is smaller.
The only major issue is file size. And as you state, in a lot of cases, the file size differences are minimal.
If we're talking mostly icons, the benefit is that a) icons aren't huge to begin with, so the file size increase is minimal and b) icons benefit the most from retina resolution.
On the other hand, if we're talking 'full screen' news photos, those could be quite a bit larger file-size-wise, but also look perfectly fine if they are not retina (as they are continuous tone) so there's a less compelling need to make those retina if you are targeting a mobile device.
A compromise for the latter might be to lazy-load them: check the screen size; if phone-sized, load the regular image and call it good. If it's larger than phone size, load the regular image, then go back and grab the retina version for those on an iPad 3, for example (see the sketch below).
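A sketch of that compromise, assuming an "@2x" naming convention and a 768px phone cutoff (both made up for illustration):

// Load the regular image first, then upgrade to the retina version
// only on large, high-density screens (e.g. an iPad 3).
function upgradeToRetina(img: HTMLImageElement): void {
  const phoneSized = window.screen.width <= 768;
  const retina = window.devicePixelRatio >= 1.5;
  if (phoneSized || !retina) {
    return; // phone-sized or 1x screen: the regular image is good enough
  }
  const src2x = img.src.replace(/(\.\w+)$/, '@2x$1');
  const hiRes = new Image();
  hiRes.onload = () => {
    img.src = src2x; // swap in only once fully downloaded
  };
  hiRes.src = src2x;
}

document.querySelectorAll('img').forEach(upgradeToRetina);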
The only technical problem is IE8 and older. They can't handle the CSS you'd typically use for retina images. There are workarounds, but not for sprites, which you'd commonly use for icons.
Eventually, we'll see more SVG support, which will solve this problem, at least for icons. When I am doing pure iOS work, for example, almost all of my imagery is SVG now. It's smaller, and automatically retina-ready.

Is it necessary to fill in the appxmanifest with all tile resolution images?

The appxmanifest has entries for developers to fill in four different tile resolution images for each kind of tile (small, medium, wide, large); the different resolutions are scale-180, scale-140, scale-100, and scale-80.
From what I can tell, if the developer fills in the largest resolution image (scale-180), the system will automatically scale it down when needed on lower-resolution displays, so it works everywhere. So do most developers really even need to bother filling in all the different tile resolution images? It seems like it will just bloat the size of the application for nothing.
My question is about filling in the different resolution images, not about whether I need to have the small, medium, wide, and large images at all.
It's certainly not necessary, but it is recommended. Scaling images down (or up) can cause visual artifacts that may not be present in the original image. For example, the medium tile's base size is 150×150 pixels, so its scale-180 asset is 270×270; on a 100% display the system has to shrink that 270×270 bitmap back down, and fine detail can alias. It may be the case that a particular image looks fine in all scaling plateaus, but others may not (this is especially true for bitmap images). Again, it is up to your discretion, but it is worth visual inspection at each scaling level.
MSDN has a scaling guidelines document that could be helpful.
The reasons for filling in the wide tile and the small tile are different:
1 - Wide tile: if your app has a live tile notification feature, then you need to implement it. It is not mandatory, just one of the criteria for a Metro app.
2 - Small tile: this is the default tile of the application, so you should go ahead and add it. If there is no live tile concept, then you can fill in only the small tile - the square one.
3 - Then there is a place to put the image that will be used as a tile icon when a user searches for the app.
And then there is the store logo.
All of these are set by default to the cross-marked placeholder logo, which you generally see. None of them are mandatory during development, but when you finally upload to the store they become hard requirements; otherwise you will get a long list of improvements to be made to your app from Microsoft. :) I have faced that.