how to make a camera in tk? - tcl

I have a grid-based coordinate system where the distance between grid centers is a set number of pixels. Doing this in OpenGL, I made a camera using view/projection matrices; changing the view center and scale for the whole scene could be done by setting matrix elements.
This is from a prototype using OpenGL that I made before going full Tcl/Tk. The node positions are {[1..5], [1..3]} (text at fractional y-offset).
How would I accomplish this in Tcl/Tk? I'm reading about canvas zooming, but I was hoping for a smaller example: create lines between nodes 1..2 and 3..4, pan, zoom, pan again, zoom again.

The Tk canvas widget supports panning (it follows the Tk scrollable-window protocol) but doesn't allow zooming of the viewport. (It has a scale method, but that just changes the coordinates of the objects, which is not the same thing; in particular, it doesn't change things like font sizes, embedded window/image sizes, or line thicknesses.) You can control the panning via the xview and yview methods, as with any scrollable Tk widget.
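To give a concrete, minimal sketch of the sequence you asked for (create lines, pan, zoom, pan, zoom) using only plain canvas operations, where the grid spacing and coordinates are made up for illustration:

    package require Tk

    # The scroll region is the "world"; the window is a viewport onto it.
    canvas .c -width 400 -height 300 -scrollregion {0 0 1000 1000}
    pack .c -fill both -expand true

    # Grid spacing of 100 px: lines between nodes 1-2 and 3-4.
    .c create line 100 100 200 100 -width 2 -tags world
    .c create line 300 100 400 100 -width 2 -tags world

    # Pan: move the viewport via the scrollable-window protocol.
    .c xview moveto 0.1
    .c yview moveto 0.05

    # "Zoom": rescale every item's coordinates about the origin (0,0).
    # This rewrites coordinates only; fonts, line widths and embedded
    # images keep their sizes unless you adjust them separately.
    .c scale world 0 0 2.0 2.0

    # Pan and zoom again.
    .c xview moveto 0.2
    .c scale world 0 0 0.5 0.5

To zoom about the current view center instead of the origin, pass the center's canvas coordinates (obtained via canvasx/canvasy) as the origin arguments to scale.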
However, there's an alternative that might be more suitable for you.
Paul Obermeier's Tcl3D, which allows you to use OpenGL together with Tcl/Tk, would let you keep a lot of what you've done so far with writing OpenGL code, and yet it still provides the benefits of being able to use Tcl/Tk. Basically, the core of it is that it includes Togl, a package for presenting an OpenGL drawing surface as a Tk widget, together with bindings to the OpenGL API. I know of quite a few people who use Tcl3D to power their (both commercial and open-source) applications, and you'd end up with a hybrid that should provide the best of both worlds. (Also, it's considered entirely reasonable to include some custom C code in a Tcl application that needs a bit more performance in a critical spot; it's outright good style to do this. Complex GUIs sometimes need that sort of thing.)

Related

Cesium Resampling

I know that Cesium offers several different interpolation methods, including linear (or bilinear in 2D), Hermite, and Lagrange. One can use these methods to resample sets of points and/or create curves that approximate sampled points, etc.
However, the question I have is what method does Cesium use internally when it is rendering a 3D scene and the user is zooming/panning all over the place? This is not a case where the programmer has access to the raster and so on, so one can't just get in the middle of it all and call the interpolation functions directly. Cesium is doing its own thing as quickly as it can in response to user control.
My hunch is that the default is bilinear, but I don't know that nor can I find any documentation that explicitly says what is used. Further, is there a way I can force Cesium to use a specific resampling method during these activities, such as Lagrange resampling? That, in fact, is what I need to do: force Cesium to employ Lagrange resampling during scene rendering. Any suggestions would be appreciated.
EDIT: Here's a more detailed description of the problem…
Suppose I use Cesium to set up a 3-D model of the Earth including a greyscale image chip at its proper location on the model Earth's surface, and then I display the results in a Cesium window. If the view point is far enough from the Earth's surface, then the number of pixels displayed in the image chip part of the window will be fewer than the actual number of pixels that are available in the image chip source. Some downsampling will occur. Likewise, if the user zooms in repeatedly, there will come a point at which there are more pixels displayed across the image chip than the actual number of pixels in the image chip source. Some upsampling will occur. In general, every time Cesium draws a frame that includes a pixel data source there is resampling happening. It could be nearest neighbor (doubt it), linear (probably), cubic, Lagrange, Hermite, or any one of a number of different resampling techniques. At my company, we are using Cesium as part of a large government program which requires the use of Lagrange resampling to ensure image quality. (The NGA has deemed that best for its programs and analyst tools, and they have made it a compliance requirement. So we have no choice.)
So here's the problem: while the user is interacting with the model, for instance zooming in, the drawing process is not in the programmer's control. The resampling is either happening in the Cesium layer itself (hopefully) or in even still lower layers (for instance, the WebGL functions that Cesium may be relying on). So I have no clue which technique is used for this resampling. Worse, if that technique is not Lagrange, then I don't have any clue how to change it.
So the question(s) would be this: is Cesium doing the resampling explicitly? If so, then what technique is it using? If not, then what drawing packages and functions are Cesium relying on to render an image file onto the map? (I can try to dig down and determine what techniques those layers may be using, and/or have available.)
UPDATE: Wow, my original answer was a total misunderstanding of your question, so I've rewritten from scratch.
With the new edits, it's clear your question is about how images are resampled to the screen while rendering. These images are texture maps in WebGL, and the process of getting them to the screen quickly is implemented in hardware, on the graphics card itself. Software on the CPU is not performant enough to map individual pixels to the screen one at a time, which is why we have hardware-accelerated 3D cards.
Now for the bad news: this hardware supports nearest-neighbor, linear, and mipmapping. That's it. 3D graphics cards do not use any fancier interpolation, as the work needs to be done in a fraction of a second to keep the frame rate as high as possible.
Mipmapping is described well by @gman in his article WebGL 3D Textures. It's a long article, but search for the word "mipmap" and skip ahead to his description of it. Basically, a single image is reduced into progressively smaller images prior to rendering, so an appropriately sized starting point can be chosen at render time. But there will always be a final mapping to the screen, and as you can see, the choices are NEAREST or LINEAR.
Quoting @gman's article here:
You can choose what WebGL does by setting the texture filtering for each texture. There are 6 modes
NEAREST = choose 1 pixel from the biggest mip
LINEAR = choose 4 pixels from the biggest mip and blend them
NEAREST_MIPMAP_NEAREST = choose the best mip, then pick one pixel from that mip
LINEAR_MIPMAP_NEAREST = choose the best mip, then blend 4 pixels from that mip
NEAREST_MIPMAP_LINEAR = choose the best 2 mips, choose 1 pixel from each, blend them
LINEAR_MIPMAP_LINEAR = choose the best 2 mips, choose 4 pixels from each, blend them
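For concreteness, here is how those modes are selected in raw WebGL; a minimal sketch, assuming an existing rendering context gl and a loaded image img (both hypothetical names):

    const tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
    gl.generateMipmap(gl.TEXTURE_2D);  // WebGL1 needs power-of-two sizes for this

    // Minification filter: applies when the texture is drawn smaller than
    // its source (zoomed out). LINEAR_MIPMAP_LINEAR is trilinear filtering.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);

    // Magnification filter: applies when zoomed in; only NEAREST or LINEAR
    // are valid here.
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);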
I guess the best news I can give you is that Cesium uses the best of those, LINEAR_MIPMAP_LINEAR, to do its own rendering. If you have a strict requirement for more time-consuming imagery interpolation, that means you have a requirement not to use a realtime 3D hardware-accelerated graphics card, as there is no way to do Lagrange image interpolation during a realtime render.

is there a way to bring stage3d to front?

I tried the Hello Triangle example on the Adobe site:
http://www.adobe.com/devnet/flashplayer/articles/hello-triangle.html
It works, but the Context3D seems to render on the stage's background, at the lowest level; anything I draw covers the 3D content.
I want to bring it to the front, or set it to a certain level. How can I do that?
Also, I was told that using the 2D API and the 3D API together lowers 3D performance. Is that true? In my work I still need the 2D API, for example for drawing TextFields.
Everything is layered like this (from bottom to top):
StageVideo (1 or more instances) > Stage3D (1 or more instances) > your regular display list.
And yes, regular display objects may degrade the performance of Stage3D, so it may be better to use Stage3D alternatives to them. Some Stage3D-accelerated frameworks already have some of those built in (like TextField in Starling).
No, you can't bring it to the front.
2D and 3D do not relate to each other directly. But of course, if you write 2D code that eats 100% of the CPU, you'll get slow overall performance.
The only way is to grab the rendered bitmap from the Stage3D instance and display it on top of your display list. But that has to happen on every frame, which will hurt performance a lot, and of course there is no mouse interaction in the copy. This only works to display the rendered scene above the display list; it's just a simulation.

Fast and responsive interactive charts/graphs: SVG, Canvas, other?

I am trying to choose the right technology to use for updating a project that basically renders thousands of points in a zoomable, pannable graph. The current implementation, using Protovis, is underperforming. Check it out here:
http://www.planethunters.org/classify
There are about 2000 points when fully zoomed out. Try using the handles on the bottom to zoom in a bit, and drag it to pan around. You will see that it is quite choppy and your CPU usage probably goes up to 100% on one core unless you have a really fast computer. Each change to the focus area calls a redraw to protovis which is pretty darn slow and is worse with more points drawn.
I would like to make some updates to the interface as well as change the underlying visualization technology to be more responsive with animation and interaction. From the following article, it seems like the choice is between another SVG-based library, or a canvas-based one:
http://www.sitepoint.com/how-to-choose-between-canvas-and-svg/
d3.js, which grew out of Protovis, is SVG-based and is supposed to be better at rendering animations. However, I'm dubious as to how much better and what its performance ceiling is. For that reason, I'm also considering a more complete overhaul using a canvas-based library like KineticJS. However, before I get too far into using one approach or another, I'd like to hear from someone who has done a similar web application with this much data and get their opinion.
The most important thing is performance, with a secondary focus on ease of adding other interaction features and programming the animation. There will probably be no more than 2000 points at once, with those small error bars on each one. Zooming in, out, and panning around need to be smooth. If the most recent SVG libraries are decent at this, then perhaps the ease of using d3 will outweigh the increased setup for KineticJS, etc. But if there is a huge performance advantage to using a canvas, especially for people with slower computers, then I would definitely prefer to go that way.
Example of app made by the NYTimes that uses SVG, but still animates acceptably smoothly:
http://www.nytimes.com/interactive/2012/05/17/business/dealbook/how-the-facebook-offering-compares.html . If I can get that performance and not have to write my own canvas drawing code, I would probably go for SVG.
I noticed that some users have used a hybrid of d3.js manipulation combined with canvas rendering. However, I can't find much documentation about this online or get in contact with the OP of that post. If anyone has any experience doing this kind of DOM-to-Canvas (demo, code) implementation, I would like to hear from you as well. It seems to be a good hybrid of being able to manipulate data and having custom control over how to render it (and therefore performance), but I'm wondering if having to load everything into the DOM is still going to slow things down.
I know that there are some existing questions that are similar to this one, but none of them exactly ask the same thing. Thanks for your help.
Follow-up: the implementation I ended up using is at https://github.com/zooniverse/LightCurves
Fortunately, drawing 2000 circles is a pretty easy example to test. So here are four possible implementations, two each of Canvas and SVG:
Canvas geometric zooming
Canvas semantic zooming
SVG geometric zooming
SVG semantic zooming
These examples use D3's zoom behavior to implement zooming and panning. Aside from whether the circles are rendered in Canvas or SVG, the other major distinction is whether you use geometric or semantic zooming.
Geometric zooming means you apply a single transform to the entire viewport: when you zoom in, circles become bigger. Semantic zooming in contrast means you apply transforms to each circle individually: when you zoom in, the circles remain the same size but they spread out. Planethunters.org currently uses semantic zooming, but it might be useful to consider other cases.
Geometric zooming simplifies the implementation: you apply a translate and scale once, and then all the circles are re-rendered. The SVG implementation is particularly simple, updating a single "transform" attribute. The performance of both geometric zooming examples feels more than adequate. For semantic zooming, you'll notice that D3 is significantly faster than Protovis. This is because it's doing a lot less work for each zoom event. (The Protovis version has to recalculate all attributes on all elements.) The Canvas-based semantic zooming is a bit more zippy than SVG, but SVG semantic zooming still feels responsive.
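To make the geometric variant concrete, here is a minimal Canvas sketch in the spirit of those examples, written against the current d3.zoom API rather than the d3.behavior.zoom of that era, with random data standing in for real points:

    import * as d3 from "d3";

    const canvas = document.querySelector("canvas") as HTMLCanvasElement;
    const ctx = canvas.getContext("2d")!;
    const points: [number, number][] = d3.range(2000).map(
      () => [Math.random() * canvas.width, Math.random() * canvas.height]);

    let transform = d3.zoomIdentity;  // one transform for the whole viewport

    function draw(): void {
      ctx.save();
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.translate(transform.x, transform.y);
      ctx.scale(transform.k, transform.k);
      for (const [x, y] of points) {
        ctx.beginPath();
        ctx.arc(x, y, 2.5, 0, 2 * Math.PI);
        ctx.fill();
      }
      ctx.restore();
    }

    // d3.zoom interprets the drag/wheel gestures; we only re-render.
    d3.select(canvas).call(
      d3.zoom<HTMLCanvasElement, unknown>().on("zoom", (event) => {
        transform = event.transform;
        draw();
      }));

    draw();

Note that this redraws all 2000 circles under a single transform; the semantic variant would instead recompute each circle's position per zoom event while keeping its radius fixed.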
Yet there is no magic bullet for performance, and these four possible approaches don't begin to cover the full space of possibilities. For example, you could combine geometric and semantic zooming, using the geometric approach for panning (updating the "transform" attribute) and only redrawing individual circles while zooming. You could probably even combine one or more of these techniques with CSS3 transforms to add some hardware acceleration (as in the hierarchical edge bundling example), although that can be tricky to implement and may introduce visual artifacts.
Still, my personal preference is to keep as much in SVG as possible, and use Canvas only for the "inner loop" when rendering is the bottleneck. SVG has so many conveniences for development—such as CSS, data-joins and the element inspector—that it is often premature optimization to start with Canvas. Combining Canvas with SVG, as in the Facebook IPO visualization you linked, is a flexible way to retain most of these conveniences while still eking out the best performance. I also used this technique in Cubism.js, where the special case of time-series visualization lends itself well to bitmap caching.
As these examples show, you can use D3 with Canvas, even though parts of D3 are SVG-specific. See also this force-directed graph and this collision detection example.
I think that in your case the decision between canvas and SVG is not like a decision between »riding a horse« and »driving a Porsche«. For me it is more like a decision about the car's color.
Let me explain:
Assume that, in whichever framework you choose, the operations
draw a star,
add a star and
remove a star
take linear time. If your choice of framework was good, they are a bit faster; otherwise a bit slower.
If you further assume that the framework is simply fast, it becomes obvious that the lack of performance is caused by the sheer number of stars, and handling that number is something none of the frameworks can do for you; at least, none that I know of.
What I want to say is that the problem boils down to a basic problem of computational geometry, namely range searching, and to one of computer graphics, namely level of detail.
To solve your performance problem you need to implement a good preprocessor which can determine, very quickly, which stars to display, and can perhaps cluster stars that are close together, depending on the zoom level. The only thing that keeps your view vivid and fast is keeping the number of stars to draw as low as possible.
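As a sketch of that preprocessing step, here is one way to do the range-searching half with d3-quadtree; allStars and the viewport bounds are assumed to exist, and the clustering half is left out:

    import { quadtree, QuadtreeLeaf } from "d3-quadtree";

    interface Star { x: number; y: number; }

    // Build the index once; query it on every pan/zoom.
    const index = quadtree<Star>()
      .x(d => d.x)
      .y(d => d.y)
      .addAll(allStars);  // allStars: Star[] (assumed to exist)

    // Collect only the stars inside the viewport [x0,y0]..[x1,y1].
    function visibleStars(x0: number, y0: number, x1: number, y1: number): Star[] {
      const result: Star[] = [];
      index.visit((node, nx0, ny0, nx1, ny1) => {
        if (!("length" in node)) {  // leaf: walk its coincident points
          let leaf: QuadtreeLeaf<Star> | undefined = node;
          do {
            const d = leaf.data;
            if (d.x >= x0 && d.x < x1 && d.y >= y0 && d.y < y1) result.push(d);
          } while ((leaf = leaf.next));
        }
        // Returning true prunes subtrees entirely outside the viewport.
        return nx0 >= x1 || ny0 >= y1 || nx1 < x0 || ny1 < y0;
      });
      return result;
    }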
Since you stated that the most important thing is performance, I would tend toward canvas, because it works without DOM operations. It also offers the opportunity to use WebGL, which increases graphics performance a lot.
BTW: did you check paper.js? It uses canvas, but emulates vector graphics.
PS: In this book you can find a very detailed discussion of graphics on the web: the technologies, and the pros and cons of canvas, SVG and DHTML.
I recently worked on a near-realtime dashboard (refresh every 5 seconds) and chose to use charts that render using canvas.
We tried Highcharts (an SVG-based JavaScript charting library) and CanvasJS (a canvas-based JavaScript charting library). Although Highcharts is a fantastic charting API and offers way more features, we decided to use CanvasJS.
We needed to display at least 15 minutes of data per chart (with option to pick range of max two hours).
So for 15 minutes: 900 points (one data point per second) × 2 (line and bar combination chart) × 4 charts = 7200 points total.
Using the Chrome profiler, with CanvasJS the memory never went above 30 MB, while with Highcharts memory usage exceeded 600 MB.
Also, with a refresh time of 5 seconds, CanvasJS rendering was a lot more responsive than Highcharts.
We used one timer (setInterval, 5 seconds) to make 4 REST API calls to pull the data from the back-end server, which connected to Elasticsearch. Each chart updated as data was received by jQuery.post().
That said, for offline reports I would go with Highcharts, since it has a more flexible API.
There's also ZingChart, which claims to use either SVG or Canvas, but I haven't looked at it.
Canvas should be considered when performance is really critical; SVG when flexibility matters. Not that canvas frameworks aren't flexible, but it takes a lot more work for a canvas framework to provide the same functionality as an SVG framework.
You might also look into Meteor Charts, which is built on top of the uber-fast KineticJS framework: http://meteorcharts.com/
I also found that when we print a page with SVG graphics to PDF, the resulting PDF still contains a vector-based image, while if you print a page with Canvas graphics, the image in the resulting PDF is rasterized.

HTML5 Canvas and Game Programming

I hope this isn't too open ended.
I'm wondering if there is a better (more battery-friendly) way of doing this --
I have a small HTML 5 game, drawn in a canvas (let's say 500x500). I have some objects whose positions I update every 50ms or so. My current implementation re-draws the entire canvas every 50ms. I can't imagine that being very good for battery life on mobile platforms.
Is there a better way to do this? This must be a common pattern with games.
EDIT:
As requested, here are some more details:
Right now, the objects are geometric primitives drawn via arcs and lines. I'm not opposed to making these small png/jpg/gif files instead, if that'd help out. These are small graphics, just 15x15 or so.
As the game progresses, more and more of the screen changes at a time. However, at the start, the screen changes relatively slowly (the objects randomly move a few pixels every 50ms).
Nearly every game with continuous animation like this redraws everything every frame; clever updating algorithms are only applicable when a small part of the screen is changing and there is a nice rule to figure out what is overlapping that part.
Here is some general optimization advice:
Make sure that as much as possible of your graphics are handled by the GPU and not the CPU. (This may be impossible if the user's browser does not use the GPU for 2D canvas rendering, but you can expect upgrades may change that as HTML5 gaming gains popularity.)
This means that you should avoid elaborate clever algorithms in favor of doing as little work as possible in JS code — except that avoiding performing a lot of drawing when it is easy to determine that it will be invisible (e.g. outside the bounds of the screen) is generally worthwhile.
If your target platforms support it (generally not the case for current mobile devices), try using WebGL instead of 2D Canvas. You will have to do more detail work, but WebGL allows you to use operations which are much more likely to be provided efficiently by the GPU hardware.
If your game becomes idle — that is, nothing is actually animating at the moment — stop redrawing. Stop your update loop until the user interacts with the game or a timeout occurs.
It may be helpful for you to add to your question details of what types of graphics you are drawing (e.g. are you using sprites, or geometric primitives? Are you drawing images rotated/scaled? Does most of the screen change or just a few small objects? Are you blending many layers?) and perhaps even a screenshot or two; then we can suggest what sort of optimizations are suitable for your particular game.
Don't draw a background, make it an image and set the CSS background-image of the canvas.
Using requestAnimationFrame should help with battery life.
http://paulirish.com/2011/requestanimationframe-for-smart-animating/
Only do a redraw if something has actually changed. If you haven't already, introduce the concept of invalidation (i.e., the canvas is valid, so nothing redraws until something moves; anything moving within the window of the canvas invalidates it, triggering a redraw).
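A minimal sketch of that invalidation idea, combined with requestAnimationFrame from the answer above (update and render are hypothetical game functions; a continuously animating game would call invalidate from its tick):

    let dirty = false;     // does the canvas need repainting?
    let scheduled = false; // is a frame already queued?

    function frame(): void {
      scheduled = false;
      if (dirty) {
        dirty = false;
        update();  // advance game state
        render();  // repaint the canvas
      }
    }

    // Call invalidate() whenever something moves (input, timers, physics).
    // When the game is idle, no frames are scheduled and nothing redraws.
    function invalidate(): void {
      dirty = true;
      if (!scheduled) {
        scheduled = true;
        requestAnimationFrame(frame);
      }
    }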
If you want to be battery-friendly you can use Crafty. This game engine uses modern CSS3 technology, so it doesn't need to update a canvas all the time. Look at this example game here.
If you don't want to redraw the entire canvas every frame, you are looking at "dirty check" or "dirty matrix" algorithms.
Dirty checking seems more efficient than a full redraw, but I think it depends on your rendering implementation.
It may not be worth it if you render with Canvas 2D. Nearly every game has complex sprites and animation; with dirty checking, when part of a sprite or the background map needs updating, you have to figure out everything that overlaps that part, clearRect that small area of the canvas, and then redraw every overlapping sprite, map tile, and so on.
That means you end up calling the canvas rendering API more often than a plain full redraw would, because of the overlapping parts, and Canvas 2D rendering calls usually aren't cheap.
But if you use WebGL, things may be quite different. Even though I am not familiar with WebGL, I do know it may be more efficient there; dirty checking should be a good choice for your purpose. A sketch of the dirty-rectangle pass follows.
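For illustration, a sketch of such a dirty-rectangle repaint in Canvas 2D; the Sprite type and the tracking of dirty regions are hypothetical:

    interface Rect { x: number; y: number; w: number; h: number; }
    interface Sprite { bounds: Rect; draw(ctx: CanvasRenderingContext2D): void; }

    function intersects(a: Rect, b: Rect): boolean {
      return a.x < b.x + b.w && b.x < a.x + a.w &&
             a.y < b.y + b.h && b.y < a.y + a.h;
    }

    // Repaint only the dirty region, redrawing just the sprites that
    // overlap it; everything else on the canvas is left untouched.
    function repaint(ctx: CanvasRenderingContext2D, dirty: Rect, sprites: Sprite[]): void {
      ctx.save();
      ctx.beginPath();
      ctx.rect(dirty.x, dirty.y, dirty.w, dirty.h);
      ctx.clip();  // confine all drawing to the dirty area
      ctx.clearRect(dirty.x, dirty.y, dirty.w, dirty.h);
      for (const s of sprites) {
        if (intersects(s.bounds, dirty)) s.draw(ctx);
      }
      ctx.restore();
    }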

Drawing abstract canvas

I'm planning to write a diagram editor-style application, where you organize objects on a canvas. This application will need to support setting the viewport, zooming, cropping, and a lot of other standard features of such graph-style applications. I'm looking for toolkits or frameworks which support drawing in a standard mathematical coordinate space (0,0 as the center point, extendable in all directions) and will scale, crop, and zoom it according to (user) commands. Language doesn't really matter, but the more geared it is towards standard GUI applications the better. In particular, I would like to be able to reuse standard controls and buttons on the canvas if possible.
I think Qt is your friend here. It offers what you need, is multiplatform and quite well designed, and there are bindings for several languages.
From my experience, anything mid-level like C++ with toolkits (Qt, GTK, the Windows API, etc.) is horrible for this kind of work. Not that they can't do it; it's just that there are 15 lines of obscure code for each simple operation. They are simply not very efficient to work with, and more geared towards creating fixed GUIs than arbitrary graphics.
This sounds like a good job for Flash, optionally something on top of SVG, or maybe even a web app in JavaScript.