Minecraft, Custom Libraries and LWJGL

What would be my best bet for rolling my own libraries when I want to launch Minecraft on an ARM device? The standard Mojang repositories only have x86 versions of LWJGL in particular, and trying to overwrite the copies in .minecraft/libraries with the armhf versions triggered a corrupt-file error and a redownload cycle that restored the x86 version.

I think you're trying to run Minecraft on mobile? Correct me if that's wrong. LWJGL is already compatible with ARM; it doesn't much care which CPU it runs on (AMD or Intel, for example) because the two main APIs it uses (Java and OpenGL) don't either. It supports all the major operating systems for the same reason. It's just preferable to use 64-bit libraries on 64-bit systems for performance; you can always fall back to 32-bit.
I would also like to clear up that Mojang did not make LWJGL, so their repositories aren't the place to look. Besides, if it were possible to simply swap the natives, they would have done that instead of making it again.

Related

How to apply the OpenGL function

I have a problem applying some OpenGL functions (e.g. glDeleteBuffers).
My computer's specs are:
Renderer: AMD Radeon HD 6800 Series
Operating system: Windows 7
CPU: Intel(R) Core(TM) i7-2600
I used OpenGL Extensions Viewer 4.4.3 to view information about OpenGL.
I updated my graphics driver to the latest version and found that the OpenGL version is 4.4, as shown in the picture below:
I am not sure what more I can do from here. I would like to use functions like glDeleteBuffers, glGenBuffers, glBindBuffer, glBufferData...
Could someone give me some help?
The functions you've listed aren't loaded by default, even on systems whose hardware supports modern OpenGL. To get access to these functions you need to query for them at runtime and load them if available.
A guide to querying and loading the functions yourself can be found here.
If you simply want to load the functions associated with an OpenGL version (such as 4.1, 3.1, etc.), you can use something like GLEW to handle the querying and loading for you.
If you also want to use older functionality, make sure to look at loading a compatibility profile to get access to deprecated features, e.g. the fixed-function pipeline. Not that I'd advise it!
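GLEW aside, the query-then-load pattern itself is small. Here is a hedged sketch of it using Python's ctypes, with the C math library standing in for the GL driver (a real OpenGL example needs a live context, so nothing below is an actual GL call):

```python
import ctypes
import ctypes.util

def load_symbol(lib, name):
    """Query-then-load: return the function if the library exports it,
    otherwise None. GLEW does the moral equivalent of this for every
    OpenGL entry point."""
    try:
        return getattr(lib, name)
    except AttributeError:
        return None

# libm stands in for the GL driver here (assumes a Unix-like system);
# fall back to the current process if find_library comes up empty
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

sqrt = load_symbol(libm, "sqrt")
sqrt.restype = ctypes.c_double
sqrt.argtypes = [ctypes.c_double]
print(sqrt(9.0))        # 3.0 -- the symbol exists, safe to call

missing = load_symbol(libm, "glDeleteBuffers")
print(missing is None)  # True -- not exported; fall back or report an error
```

The real thing differs only in where the lookup happens (wglGetProcAddress or glXGetProcAddress instead of a plain symbol lookup), not in the shape of the check.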

Is it possible to create a portable UWP app (=no installation needed)

The UWP infrastructure seems to have everything that's needed for a portable model:
Clear separation between OS and application
Clear separation between different applications
Fewer dependencies
Support for portable class libraries
As far as I know, portable scenarios are not supported right now. Is this something we can expect in the future, or is it intrinsically impossible due to the architecture of UWP/WinRT?
How hard would it be to create some kind of host executable that can run any local UWP app? At the moment I'm looking for portability between different Windows 10 PCs, not so much cross-device or cross-OS.
I'm aware you can side load UWP apps, but that's not what I'm looking for.
Is this something we can expect in the future, or is it intrinsically impossible due to the architecture of UWP/WinRT?
I don't see any major technical limitations that would prevent this scenario. UWP apps can register with some global mechanisms (which is something portable apps shouldn't do), like push notifications or background tasks, but the whole application model has been designed so that users can limit access to those features on a per-application basis. So every developer publishing an app is supposed to have considered beforehand that those code paths may fail.
But "technically possible" doesn't mean that Microsoft will do it. In fact, I seriously doubt they ever will. The reason is simple: they're pushing the store with all their might, even seeking to put Win32 apps on it. Clearly, they're moving towards putting more apps on the store, not the other way around.
As to whether it would be possible to make a third-party standalone runner, I think so. When running unit tests for a UWP app, Visual Studio launches a sort of "shell" hosting the app (this became very apparent recently, because after a Windows 10 update the API that allowed hiding the splash screen stopped working). I don't know what API is used to create this shell, but I'd definitely dig that way if I wanted to make a portable UWP host.
Although I haven't done this myself (I'll update this answer if and when I do), reading this article makes it look like there is an easy way to create an installer that calls that command.
In short, an appx package can be installed locally using the command:
C:\Program Files (x86)\Windows Kits\10\bin\x86\WinAppDeployCmd.exe
This can probably be wrapped in a UI or CMD installer.
Here's a nice example of it (not mine).
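For what it's worth, the "wrap it in an installer" part is just a thin shell-out. A sketch follows (Python used for illustration; the tool path and arguments are placeholders, so take the real WinAppDeployCmd arguments from Microsoft's documentation - none are hard-coded here):

```python
import subprocess

def run_deploy_tool(tool_path, args):
    """Shell out to the deployment tool and surface its exit code; a GUI
    installer would wrap exactly this call. The caller supplies the tool
    path and argument list, e.g. taken from Microsoft's documentation."""
    result = subprocess.run([tool_path] + args, capture_output=True, text=True)
    return result.returncode

# demo with a harmless stand-in command instead of the real tool
exit_code = run_deploy_tool("echo", ["install", "MyApp.appx"])
print(exit_code)  # 0 on success
```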

What does "runtime" mean in Intel XDK?

I am completely new to mobile development, and I am interested in hybrid apps and, of course, the Intel XDK. While reading the Intel XDK documentation I came across the word "runtime".
What does this term mean, and what do they mean when they say
Crosswalk is a HTML5 runtime, ...
in their blog post?
Forgive me if I'm asking one of Stack Overflow's stupid questions.
"Runtime" generally refers to the time a program runs, as opposed to other times in the programs life cycle like compile time. Software is typically written, then compiled, then deployed or installed, then run. These different phases are often referred to as "compile time", "install time" and "runtime". E.g. "at compile time, the compiler will ...".
"A runtime" is a software package that is necessary at, well, runtime. E.g. a library, interpreter, virtual machine or other such supporting packages. Crosswalk is a system that takes in HTML 5 "apps" (including HTML, Javascript etc.) and, well, makes them work. Executes them. Runs them. It's essentially a browser engine, but not packaged as a browser, just as a system that can interpret and run HTML 5, which you can embed into whatever other software you want.
So you could write your application in HTML 5; then to make that work like a native desktop application which can be downloaded and installed, you write a tiny Windows application and an OS X application which basically just contains Crosswalk, which runs your actual application inside it, cross-platform.
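The compile-time vs. runtime distinction from the answer above can be seen directly in a few lines (Python shown purely for illustration, since it exposes both phases):

```python
# compile time: the source is parsed and checked for syntax errors *now*
code = compile("x = 1 + 1", "<example>", "exec")

# runtime: the compiled code only does its work (or fails) when executed
ns = {}
exec(code, ns)
print(ns["x"])  # 2

# this compiles without complaint -- the error belongs to runtime
bad = compile("1 / 0", "<example>", "exec")
try:
    exec(bad)
except ZeroDivisionError:
    print("failed at runtime, not at compile time")
```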
Run-time - the length of time it takes to execute a software program
The Intel XDK includes the Crosswalk runtime as a supported target of the cloud-based build service.
The Crosswalk runtime can be used to create HTML5 applications with a dedicated runtime that runs on multiple platforms without depending on the system webview. It can be used for applications that need HTML5 features like WebRTC, hardware-accelerated WebGL support, and other bleeding-edge HTML5 features.
For more information, please follow these resources:
crosswalk-runtime
More resources: Using the Intel XDK “Crosswalk for Android”

Defining a runtime environment

I need to define a runtime environment for my development. The first idea is of course not to reinvent the wheel. I downloaded MacPorts, used easy_install, and tried Fink. I always had problems. Right now, for example, I am not able to compile SciPy because the MacPorts installer wants to download and install gcc43, but this does not compile on Snow Leopard. A bug is open for this issue, but I am basically dependent on them for my runtime to be usable.
A technique I learned some time ago was to write a makefile that downloads and builds the runtime/libs with clearly specified versions of libraries and utilities. This predates the MacPorts/Fink/apt approach, but you have much more control over it, although you have to do everything by hand. Of course, this can become a nightmare of its own if the runtime grows, but if you find a problem, you can use patch to fix the issue in the downloaded package, then build it.
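The version-pinning part of that makefile approach boils down to checksums. A minimal sketch (the manifest and tarball bytes here are hypothetical stand-ins; a real one would pin the checksum of each downloaded archive):

```python
import hashlib

# hypothetical manifest: each dependency pinned to an exact version and a
# checksum of its tarball, so a re-download years later is verifiable
TARBALL = b"stand-in for the real dependency tarball bytes"
PINS = {"zlib": hashlib.sha256(TARBALL).hexdigest()}

def verify(name, data):
    """Refuse to build from a download whose contents drifted."""
    return hashlib.sha256(data).hexdigest() == PINS[name]

print(verify("zlib", TARBALL))            # True  -- safe to unpack and build
print(verify("zlib", b"tampered bytes"))  # False -- stop the build
```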
I have multiple questions:
What is your technique to prepare a well-defined runtime/library collection for your development?
Does MacPorts/Fink/whatever allow me the same flexibility to re-hack things if something goes wrong?
Considering my makefile solution: when my software is finally out for download, what are your suggestions for solving potential mismatches between my development environment and the actual platform on my users' machines?
Edit: What I don't understand in particular is that other projects don't give me any hints. For example, I just downloaded SciPy, a complex library with lots of dependencies. Developers must have all the deps set up before working on it. Despite this, there's nothing in the svn repository that creates this environment.
Edit: Added a bounty to the question. I think this is an important issue and it deserves more answers. I will give preference to answers with real-world examples, with particular attention to any issues that arose and their solutions.
Additional questions to inspire for the Bounty:
Do you perform testing on your environment (to check proper installation, e.g. on an integration machine)?
How do you include your environment at shipping time? If it's C, do you statically link it, or ship the dynamic libraries and tweak LD_LIBRARY_PATH before running the executable? What about the same issue for Python, Perl, and other languages?
Do you stick with one runtime, or update it as time passes? Do you download "trunk" packages of your dependency libraries, or fixed versions?
How do you deal with situations like: library foo needs python 2.5, but you need to develop in python 2.4 because library bar does not work with python 2.5 ?
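For the LD_LIBRARY_PATH approach raised in the questions above, a minimal launcher sketch (paths hypothetical; a real wrapper would then exec the bundled binary with the returned environment):

```python
import os

def launch_env(bundled_libdir, base_env):
    """Prepend our shipped library directory so the dynamic linker
    resolves our bundled .so files before the system copies."""
    env = dict(base_env)
    prev = env.get("LD_LIBRARY_PATH", "")
    env["LD_LIBRARY_PATH"] = bundled_libdir + (os.pathsep + prev if prev else "")
    return env

env = launch_env("/opt/myapp/lib", {"LD_LIBRARY_PATH": "/usr/local/lib"})
print(env["LD_LIBRARY_PATH"])  # /opt/myapp/lib:/usr/local/lib
```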
We use a CMake script that generates Makefiles which download (mainly through SVN), configure, and build all our dependencies. Why CMake? It's multiplatform. This works quite well, and we support invocation of scons/autopain/cmake. As we build on several platforms (Windows, Mac OS X, a bunch of Linux variants), we also support different compile flags, etc., based on the operating system. Typically a library has a default configuration, and if we encounter a system that needs special configuration, the default is replaced with a specialized one. This works quite well. We did not really find any ready-made solution that would fit our purpose.
That being said, it is a PITA to get it up and running - there are a lot of knobs to turn when you need to support several operating systems. I don't think it will become a maintenance nightmare, as the dependencies are quite fixed (libraries are upgraded regularly, but we rarely introduce new ones).
virtualenv is good, but it can't do magic - e.g. if you want to use a library that just MUST have Python 2.4 and another one that absolutely NEEDS 2.5 instead, you're out of luck. Nor can virtualenv (or any other tool) help when there's a brand-new release of an OS and half the tools just don't support it yet, as you mentioned for Snow Leopard: some problems are just impossible to solve (two libraries with absolutely conflicting needs within the same build), while others just require patience (until all the tools you need are ported to the new OS release, you just have to stick with the previous one).

How to preserve build environment during product lifecycle

What are the best practices for recording build/test machine setup during the lifetime of a project? If you need to provide a patch for a previous version of your product, you will likely need to reload the same compiler and support tools to re-issue the patched release. What do you record, and how? The obvious things are: the OS version and patch level, the compiler/IDE version and patch level, and 3rd-party tools/libraries.
My first thought is to keep a log file of all the requirements. This log file would go into your VCS.
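A sketch of that log-file idea (fields illustrative - record whatever your product actually depends on, such as compiler and SDK versions):

```python
import platform
import sys

def environment_log():
    """Snapshot the build machine into a committable text blob; the
    fields here are illustrative, not exhaustive."""
    return "\n".join([
        f"os: {platform.system()} {platform.release()}",
        f"machine: {platform.machine()}",
        f"python: {sys.version.split()[0]}",
    ])

log = environment_log()
print(log)
# write `log` to a file and commit it next to the release tag in your VCS
```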
VMware virtualization (or other similar products) is ideal for this type of thing. Build an entire development, build, or test environment and leave it set up just for that purpose. You can take the image offline, back it up to a DVD, and simply turn it back on when you need it.
I'm using Maven for Java with the enforcer plugin, so all of these things are stored in my project object model - even the required version of Maven itself. As long as I manage to get the proper version from version control, I'm home free.
3rd party tools and libraries go in version control along with everything else; we have a libs tree that goes under our VCS trunk right next to our app tree, so it gets included with any branches or tags that we create. The one wrinkle I haven't yet solved is Windows tools and libraries that require their own installers instead of running out of whatever directory VCS gives them.
For OS and compiler, I'd recommend creating a VM for each release if you can't install multiple compiler versions in parallel. Then your project wiki can document which VM and which compiler version to use for a given build. This isn't automatic like your log file would be, but it provides a ready-to-go environment (instead of potentially having to reinstall a machine to match your log file). Some projects check their entire compiler into version control, but this seems overkill to me (and doesn't play well with IDEs and compilers that need their own installers).
We don't track patch levels for the OS and compiler. I realize that it's possible that a patch would break or change something, but the chance seems so low that the cost-benefit ratio just isn't there.