The package org.lwjgl.opengl contains a whole bunch of classes named from GL11 to GL44 - one for every version from OpenGL 1.1 to OpenGL 4.4.
What exactly does this mean? Does each of these classes contain a separate, working version of OpenGL, or does each class contain only the items that were introduced in that version? How do I figure out what things are where?
It certainly looks like each class contains only the newly added values/methods. For example, the GL44 class contains only a fairly small set of entry points matching the new features added in OpenGL 4.4.
Adding a new interface for each version does have advantages:
Existing interfaces are not modified. It is generally desirable not to modify interfaces once they have been publicly exposed; having multiple versions of the same interface in circulation can be problematic.
It makes it easier for programmers to target a specific OpenGL version, because the class name tells you the version in which each entry point became available.
The downside is that you need to know (or look up) the version where each call was introduced, so that you know which class to use for the call.
I'm surprised that they did not establish an inheritance hierarchy between the classes. That would seem to give the best of both worlds:
Existing class interfaces are not modified when new versions are introduced.
Easy for programmers to target a specific maximum version by using that class.
No need for programmers to take into account the specific version where a call was introduced, as long as it's included in their target version.
This also makes conceptual sense, because each version is an extension of the previous one, which matches a subclass relationship. The OpenGL ES Java bindings on Android use this approach: GLES30 derives from GLES20, so if you're targeting ES 3.0 you can call all the entry points on GLES30, even the ones that were already present in ES 2.0.
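The inheritance approach can be sketched with a pair of stand-in classes. These are hypothetical names, not the real GLES20/GLES30 bindings (which expose many more static entry points); the point is only that Java static methods are accessible through a subclass, so code targeting the newer version never needs to name the older class:

```java
// Hypothetical stand-ins for Android's GLES20/GLES30 classes, showing how
// static entry points from the older version remain callable on the newer one.
class GLES20Like {
    // Entry point introduced in ES 2.0
    static String glCreateProgramStub() { return "program (ES 2.0 entry point)"; }
}

class GLES30Like extends GLES20Like {
    // Entry point introduced in ES 3.0
    static String glGenVertexArraysStub() { return "VAO (ES 3.0 entry point)"; }
}

public class VersionHierarchyDemo {
    public static void main(String[] args) {
        // Code targeting ES 3.0 only ever names GLES30Like, even for
        // calls that already existed in ES 2.0 -- they are inherited.
        System.out.println(GLES30Like.glCreateProgramStub());
        System.out.println(GLES30Like.glGenVertexArraysStub());
    }
}
```

With LWJGL's flat per-version classes, by contrast, the caller has to know that a given call lives in GL11 rather than GL44.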
In V1.7.0 I was able to extend the Node interfaces that came with gdx-ai, but in V1.8.0 they no longer exist. Two questions:
Why?
How can I implement my own Node from scratch? I assume the other classes need core functionality from the nodes, which previously came from the supplied interfaces like IndexedNode, TiledNode, etc.
I specifically need this for the hierarchical pathfinding implementation. I can roll back to V1.7.0, but if the latest version is not broken I would rather use that, of course.
I've just started using Unreal Engine 4, but whenever I choose Object as the parent class for my new Blueprint, I don't have any constructor (like the Construction Script function for Actor).
How can I make a constructor?
I don't want to use Actor, because the class stores the equipment data for my character.
Thanks!
I'm afraid it's not possible. Technically, the Construction Script is not a constructor as you know it from C++; the name is somewhat misleading.
The closest C++ equivalent of the Construction Script is AActor::OnConstruction(), not a class constructor.
You must also consider the fact that UObjects are not replicated by default. If in the future you want to make your inventory replicated, you will need to either switch to Actors (which is not that good an idea) or write it in C++, where you can explicitly specify which UObjects should replicate as part of an Actor or ActorComponent.
https://github.com/iniside/GameInventorySystemPlugin
Here is the inventory plugin I'm developing. It's still WIP, but the basic functionality is now implemented and it should work with version 4.6 of the engine. Right now it's a combination of C++ and Blueprint. I also recommend rebuilding it from source if you want to try it out, since the default binaries are built against the source version of the engine.
It should give you a nice starting point, either to see how things are done or to use directly.
Cocos2d-x 3.0 alpha has been out for some time now. What was improved over Cocos2d-x 2?
The feature list is substantial, but in terms of performance, are there new limitations or improvements?
Have you noticed real improvements in performance, development patterns, APIs and support?
I've been using it recently, and from what I've noticed the main difference is that everything is namespaced now, so you don't have to deal with the prefixed names that came from the Objective-C patterns: cocos2d::Point instead of CCPoint, and especially for enums, Texture2D::PixelFormat::RGBA8888 instead of kCCTexture2DPixelFormat_RGBA8888.
Also, some of the event handling now supports C++11 lambdas.
A more complete list of the changes can be found here: http://www.cocos2d-x.org/wiki/Release_Notes_for_Cocos2d-x_v300
But for the most part, from my own use of it, it's just been made to feel more like C++ instead of Objective-C.
I have switched and am finding it pretty stable. The main advantages so far:
Real buttons, instead of menus
Real-time spritesheets
SpriteBatchNodes are no longer recommended, and I did see a drop in draw calls where I had not optimized
fewer Objective-C patterns
more modern: namespaces instead of the 'CC' prefix, and C++11 features
more platforms supported
Main disadvantages for me:
EventListener pattern. I can't figure out how to get touch input to affect any objects other than the Node that triggered the event.
We use a lot of text-only buttons for debugging and they are hard to lay out :)
Lack of documentation and example code. For example, I could not find any documentation of how to use the Layout class anywhere.
Porting is a lot of work, but we decided to risk it, since otherwise we would end up maintaining an out-of-date code base. It took about 5 person-days to port our game over. The game is now stable, and we did not run into a single bug in Cocos.
I think it's the C++11 support:
auto
lambdas
It also drops the unnecessary CC prefix.
One of the changes that happened between Cocos2d-x 2.1.5 and 2.2 was the removal of project templates in Xcode (I do not know if project templates existed in VS, etc.).
The new build system creates projects under the Cocos2d-x installation (at least on Mac), and the project files appear to reference them there. This makes it very difficult to move a project without hand-tweaking. It also makes configuration management more painful, depending on how you set up your system (e.g. a root/tree layout like svn, or "drop it anywhere" like git).
Also, the Cocos2d-x library is now built as just that: a library. In previous incarnations, it was placed directly into the project. On one hand, if you don't alter the library code, this makes good sense. On the other hand, if you occasionally tweak things for a specific project, you have now altered all the projects that depend on it. Yin/yang.
I'm still very positive on Cocos2d-x. I have not upgraded to 3.0 or 2.2 yet. When it matures a little more, I will switch over, regardless of the changes. For what I need, I'm pretty sure it will still get the job done (well).
In the context of WebKit in the Chromium source code, it says it is source compatible but not binary compatible. Does this suggest that we build a .dll file of WebKit and link it into the Chrome binary?
(This answer doesn't talk about the specific context of WebKit - it's not clear what exactly you mean by the various "it says" parts. I've tried to give a more general answer.)
Suppose we have a library called LibFoo, and you have built an application called SuperBar which uses LibFoo v1.
Now LibFoo v1.1 comes out.
If it is binary compatible, then you should be able to just drop in the new binary, and SuperBar will work with the new code without any other changes.
If it is only source compatible, then you need to rebuild SuperBar against v1.1 before you can use it.
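A concrete illustration in Java, using the hypothetical LibFoo/SuperBar names from above: widening a parameter type is source compatible but binary incompatible, because the method descriptor baked into the caller's class file changes.

```java
// Hypothetical LibFoo v1:
class LibFoo {
    public static int add(int a, int b) { return a + b; }
}

// SuperBar, compiled against v1, bakes the descriptor add(II)I into its
// class file at the call site:
public class SuperBar {
    public static void main(String[] args) {
        System.out.println(LibFoo.add(2, 3));
    }
}

// Suppose LibFoo v1.1 changes the signature to:
//     public static long add(long a, long b) { return a + b; }
//
// Source compatible: recompiling SuperBar succeeds, because int arguments
// widen to long automatically.
// Binary incompatible: the OLD SuperBar.class still references add(II)I,
// which no longer exists, so dropping in the new LibFoo jar fails at link
// time with NoSuchMethodError.
```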
I would think of it from the point of view of linking:
Linking is the process of taking a class or interface and combining it into the run-time state of the Java Virtual Machine so that it can be executed. Linking a class or interface involves verifying and preparing that class or interface, its direct superclass, its direct superinterfaces, and its element type (if it is an array type), if necessary.
If introducing a change breaks linking, then it is not binary compatible (and it may not be source compatible either)
If introducing a change does not break linking, then it is at least binary compatible
I am designing a class library designed to solve a wide scope of problems. One thing about this library is that it will be usable by several different languages and environments natively. For example, there will be a C++ version written entirely in C++, a .NET version written in C# and a Java version written in Java, without any dependencies on each other... as opposed to writing the core library in C++ and simply providing .NET and Java bindings to it.
The library in each of its different forms sets out to solve a different but sometimes very similar set of problems. For example, there might be many classes whose members will be functionally identical in each language, and there will also be many classes that will be present in only one or two language versions of the library, but not the others. Take a class or struct representing a program's version number: .NET already has such a class (System.Version), so I would not include it in my .NET version, but the C++ and Java libraries would provide one.
The problem I am facing is that for classes which will exist in most or all versions of the library, the documentation will remain largely the same (obviously). The brief text for both the C++ and Java versions of a Version struct would be something like "Represents a software version number in the form major.minor.build.revision", as would the detailed class description, all the members' documentation, and so on. As you know, .NET, Java and C++ each have their own documentation syntax. Is there any way I can consolidate the documentation in a language-neutral way (without writing the documentation separately from the source code, i.e. maintaining it manually instead of generating it with doxygen/Sandcastle/javadoc), or am I stuck copying and pasting the same text into the source files of each version?
I was having the same issues and decided there were just two options for me:
Use the same documentation generator in all languages. If you use doxygen (or ROBODoc, or whatever) for all of them, you have just one doc syntax across languages. This means you have to break with language-specific conventions, though.
Write your own doc parser. This is hard work, especially for a language with quite complex syntactic rules (such as C++).
We are currently using doxygen for such projects.
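For example, doxygen accepts the same `/** ... */` comment blocks with `@param`/`@return`-style tags in C++, Java, and C# sources alike, so one comment convention can cover all three versions of the library. A minimal sketch of the Version class mentioned in the question (hypothetical code, shown here in Java; the comment blocks would transfer to the C++ and C# sources nearly verbatim):

```java
/**
 * Represents a software version number in the form
 * major.minor.build.revision.
 */
public final class Version {
    private final int major, minor, build, revision;

    /**
     * Creates a version from its four components.
     * @param major    the major version number
     * @param minor    the minor version number
     * @param build    the build number
     * @param revision the revision number
     */
    public Version(int major, int minor, int build, int revision) {
        this.major = major;
        this.minor = minor;
        this.build = build;
        this.revision = revision;
    }

    /** @return the version as a dotted string, e.g. "1.2.3.4" */
    @Override
    public String toString() {
        return major + "." + minor + "." + build + "." + revision;
    }
}
```

The trade-off is that javadoc-style comments look slightly unidiomatic in C++ headers (where `///` or `/*! ... */` are also common), but doxygen parses them all the same.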