I'm developing a game based on the libGDX framework in Eclipse that will have smooth movement between my entities. For that I have used TweenEngine (and it is working nicely), but recently I found that libGDX has its own class for interpolations.
I know both are for tweening (i.e. for nice, smooth animations), and I just want to know whether there are technical differences or limitations between these two options, basically because if both are the same I would opt for the second one, since it is already included in libGDX.
On one hand, it is generally better to use the tools already included, because of integration and final application size (not to mention environment setup or project export problems).
On the other hand, please notice that libGDX's Interpolation only really becomes useful when you use it with Scene2D actions. And here is the problem, in my opinion: you must adopt the stage and actor mechanisms, which is a good idea by itself but can be almost impossible if you have already built almost the whole application without them.
So I would recommend:
choose Scene2D actions + Interpolation easing if you are able to implement Scene2D in your project and the easing actions are the only reason you want to use the Tween Engine (see the sketch after this list)
choose the Universal Tween Engine if you want to stay independent of those new mechanisms and keep using Sprites etc. in the traditional way
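For illustration, a minimal sketch of the first option (it assumes the entity already is, or is wrapped in, a Scene2D Actor that has been added to a Stage; the helper name moveSmoothly is just an example):

import com.badlogic.gdx.math.Interpolation;
import com.badlogic.gdx.scenes.scene2d.Actor;
import com.badlogic.gdx.scenes.scene2d.actions.Actions;

public class EntityTweens {
    // Move the actor to (x, y) over one second, eased with a quadratic ease-out curve.
    public static void moveSmoothly(Actor entity, float x, float y) {
        entity.addAction(Actions.moveTo(x, y, 1f, Interpolation.pow2Out));
    }
}

The action only advances while you call stage.act(delta) and stage.draw() in your render loop, which is exactly the Scene2D machinery mentioned above.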
I'm making a Flash app for AIR. The app is mostly done, but I'm not happy with the rendering speed on mobile (render mode: gpu).
I know there is a framework called Starling that offers a user-friendly way to work with Stage3D, but I've never used it.
After looking into it and working through some tutorials, I've noticed that I need to replace all the default flash package classes, e.g. flash.display.DisplayObject -> starling.display.DisplayObject.
But doing that might be destructive to my code base; plus, I have other frameworks attached that work with the flash package classes.
Is there a way to attach Starling to a complete project without renaming all the packages, changing assets and refactoring all the frameworks that work with the default AIR API?
If you're thinking of switching to Starling, you'll have to redesign your whole rendering code; Starling is no drop-in solution. Just renaming classes in your existing code will not do, because it completely replaces the Flash display list with Stage3D, which does all of its rendering on the GPU, with all the differences that brings: bitmapped graphics, texture atlases, careful draw ordering. The learning curve can be a bit steep at the beginning, but once you get familiar with the basic concepts it's a breeze to work with.
IMHO, it's well worth the effort, especially on mobile. Code that ran in the low tens of FPS with the classic display list can easily be made to run at a solid 60 fps with Starling. Basically, for Flash on mobile, Stage3D is the only game in town, and Starling is the best-supported and most widely accepted framework for 2D work on Stage3D, with lots of supporting libraries and a very helpful community of developers.
Go on, take the plunge, you won't regret it.
You can run Starling and a native Flash application layer at the same time, but it won't give you an optimal experience.
If you want to take full advantage of the GPU acceleration of Stage3D and Starling, though, it would be preferable to refactor your existing code to use Starling display objects rather than Flash display objects.
You might want to post this question on the Starling forum; they are very helpful guys and it's a thriving developer community! - http://forum.starling-framework.org
I have been searching for a really long time for a tutorial for an augmented reality application where the user can rotate and/or move a rendered 3D object with hand movements. I have found a few example applications of this, but none with viewable source code, or that in any way allow me to change the models loaded.
What I need is either a working application that I can tamper with in AS3, or a decent tutorial for creating this kind of motion tracking. (I have been able to create a motion tracking application on its own, but not as an AR application where the loaded models are the ones affected by my movements.)
I am rather new to augmented reality programming and only have the basics of marker-based AR down, but I really don't need anything fancy, nor do I particularly need to understand the code 100% (though obviously it would be better if I did). I just need it to work.
What you need is FLARToolKit:
http://www.libspark.org/wiki/saqoosha/FLARToolKit/en
Here's a tutorial: http://mlab.taik.fi/mediacode/archives/1939
Enjoy
I'm finally taking the time to go from ActionScript 2.0 to ActionScript 3.0, and I'm trying to figure out how to pull off a simple depth-sorting system I had at my disposal in AS2 that doesn't seem to be possible in AS3.
The code goes something like this:
onClipEvent (enterFrame) {
    // Every frame, re-sort this clip so its depth follows its vertical position.
    this.swapDepths(1000 + Math.ceil(this._y));
}
This way, I could easily get mock 3D effects as something moves up and down on the screen.
Also understand, this is a really basic application of the idea. Usually I'd put in logic to allow multiple movieclips to exist at one Y value.
Whatever the case, with the changes to ActionScript's depth management, this method no longer works as it is. Maybe I just have an incomplete understanding of how the new system works (I am just a hobbyist AS programmer), but is there a better/simpler/more elegant way to pull this off in AS 3.0, short of keeping track of every clip/sprite on the stage?
I'm using Adobe Flash CS4 Professional, if that makes any difference. Also, this isn't of much importance yet; I'm still getting my bearings. But I came across the depth changes during a quick project a few weeks ago while learning about adding child MovieClips, and it seemed like I could only easily add things to the front or back of the stage, not in between.
The main difference between AS 2.0 and AS 3.0 here is depth management. In AS 2.0 you could use any depth you wanted. In AS 3.0 you cannot: children of a container occupy contiguous index positions, so you can't leave any depth unused/empty.
There are still several methods that allow you to change depth, swap depths etc. - for example setChildIndex(), swapChildren() and addChildAt() on DisplayObjectContainer - so you are free to use those to re-sort your clips by their y position each frame.
paper.js
EaselJS
fabric.js
KineticJS
Hello guys, I am new to HTML5 canvas development and I am lost choosing among canvas frameworks. There are so many of them that I can't figure out which to use. So here I am! I want your help to choose which one is best for my needs. Here are my needs:
1) I want the framework to use vector graphics. I know canvas is not the DOM and I really don't care about that; what I mean is that I want to manipulate objects after their creation. Paper.js has this feature, I don't know about the others. If advanced mouse events are available, even better.
2) I want to use the framework for images: I will load images and animate them with canvas (move them, animate some colors...).
3) I want the framework to be fast (image animation should be smooth).
4) I want the framework to have a good community, because I know I will need some help.
So which one do you think is better for me? And, if you can, please write down the strengths and weaknesses of each framework against my list.
HTML5 canvas is still a very fresh environment. You can get the impression that there are a lot of tools already available, but they are often quite immature.
My answer will cover only part of your question, because I have only used KineticJS and EaselJS.
You can start by reading the opinions at this page (mine is the last one at the bottom).
To put it shortly, KineticJS has a lower entry barrier. It's a simple drawing library and has some support for mouse events too. At the time I was trying to use it, it was barely extendable; I found it really hard to customize for my needs.
EaselJS is a bit harder to start with, but it's more advanced too. It is now part of a set of libraries known collectively as CreateJS, and there seems to be a lot of development going on there.
1) Both Kinetic and Easel support mouse events. I don't remember exactly how Kinetic behaves here, but with Easel sensing 'onMouseOver' is costly. Both libs also allow manipulating objects after creation, and you may find TweenJS a useful addition for that.
2) Again, both Kinetic and Easel allow this. Easel also has support for sprites - 'animated images' well known to web game developers.
3) I'm not sure about Kinetic, as I hadn't reached the animation part of my project before I dropped it (the lib, not the project). With Easel, speed is tricky. It has some optimization methods implemented, for example object caching and the snapToPixel flag. The examples seem to run really well. However, in my project smoothness is still an issue with Easel, despite quite a lot of effort put into optimization. Maybe I misused the API, or there is still room for more optimization that I haven't noticed.
4) Both libs are quite young but seem to be actively developed. The authors are rather responsive. The community still isn't big, but I guess CreateJS, as a more complete set of tools for creating games, will grow faster.
If you want to check it out, here is the project I mentioned. It's a web page made with EaselJS + TweenJS. It still needs some minor tweaking, though.
I have a Java library which heavily uses java.awt.Graphics2D.
I want to port my library to the HTML5 canvas by using GWT.
So I'm planning to write:
an interface (or just a class), say common.Graphics2d,
an adapter class, say com.test.awt.Graphics2d, which implements common.Graphics2d and uses java.awt.Graphics2D,
and another adapter class, say com.test.gwt.Graphics2d, which implements common.Graphics2d and uses com.google.gwt.canvas.dom.client.Context2d.
Then I will replace all uses of java.awt.Graphics2D with common.Graphics2d, so that afterwards my library will work on both GWT and plain Java.
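Roughly, the plan would look like this (just a sketch; the two methods below are only placeholders for whatever subset of Graphics2D my library actually uses, not a final API):

// Shared abstraction the library draws against (the role of common.Graphics2d).
public interface Graphics2d {
    void setColor(int r, int g, int b);
    void drawLine(double x1, double y1, double x2, double y2);
}

// Desktop adapter backed by java.awt.Graphics2D (the role of com.test.awt.Graphics2d).
public class AwtGraphics2d implements Graphics2d {
    private final java.awt.Graphics2D delegate;

    public AwtGraphics2d(java.awt.Graphics2D delegate) {
        this.delegate = delegate;
    }

    @Override
    public void setColor(int r, int g, int b) {
        delegate.setColor(new java.awt.Color(r, g, b));
    }

    @Override
    public void drawLine(double x1, double y1, double x2, double y2) {
        delegate.draw(new java.awt.geom.Line2D.Double(x1, y1, x2, y2));
    }
}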
The problem here is implementing the Graphics2D drawing methods and configuration on top of the canvas Context2d. Is it feasible to implement the same functionality with canvas?
I have done a similar thing. I have an interface which represents a view and two implementations of said interface: one for Android using its android.graphics classes, and a second implementation in GWT using com.google.gwt.canvas.client.Canvas.
The GWT canvas stuff seems pretty full-featured to me. You can draw shapes, display text and images, move, rotate, scale...
It probably depends on the functions you use (for instance, color gradients may not be easy). For basic drawing functions, the number of methods you really need to implement is very small.
You can have a look at (and reuse) classes from my jvect-clipboard package, for instance (on SourceForge). Basically, all geometric methods can use the general path drawing command, and you are left with storing colors and the like.
Have a look, for instance, at the implementation for SVG or WMF output; you will see that the code is pretty simple, especially for SVG (although it doesn't cover all possibilities, in particular gradients).
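For completeness, here is a rough sketch of what the Context2d-backed adapter from the question could look like, assuming the same two illustrative methods as in the sketch above (the Context2d and CssColor calls are the standard GWT canvas classes):

import com.google.gwt.canvas.dom.client.Context2d;
import com.google.gwt.canvas.dom.client.CssColor;

// Browser adapter backed by the GWT canvas context (the role of com.test.gwt.Graphics2d).
public class GwtGraphics2d implements Graphics2d {
    private final Context2d ctx;

    public GwtGraphics2d(Context2d ctx) {
        this.ctx = ctx;
    }

    @Override
    public void setColor(int r, int g, int b) {
        // Use the same color for both stroked and filled primitives.
        CssColor color = CssColor.make(r, g, b);
        ctx.setStrokeStyle(color);
        ctx.setFillStyle(color);
    }

    @Override
    public void drawLine(double x1, double y1, double x2, double y2) {
        ctx.beginPath();
        ctx.moveTo(x1, y1);
        ctx.lineTo(x2, y2);
        ctx.stroke();
    }
}

The Context2d instance itself would come from a Canvas widget, e.g. Canvas.createIfSupported().getContext2d().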