I want to explore the available options for building a VSTi for instruments built on the Web Audio API.
Not really - you would need to build a custom Web Audio engine host, since a VSTi would need to be able to pipe the output of the Web Audio instrument out through the VSTi interface. No one has done this, to my knowledge.
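If someone did attempt such a host, the Web Audio side can at least produce raw samples for it to pipe out. A minimal sketch using OfflineAudioContext (the VSTi-side bridge that would consume these samples does not exist and is not shown):

```typescript
// Minimal sketch: render a toy Web Audio "instrument" offline and return raw
// PCM samples. A hypothetical VSTi host bridge (not shown, and to my knowledge
// not yet written by anyone) would have to pipe these samples out natively.
async function renderInstrument(sampleRate = 44100, seconds = 1): Promise<Float32Array> {
  const ctx = new OfflineAudioContext(1, sampleRate * seconds, sampleRate);

  // A trivial "instrument": one oscillator with a decaying gain envelope.
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = 440;
  gain.gain.setValueAtTime(0.8, 0);
  gain.gain.exponentialRampToValueAtTime(0.001, seconds);
  osc.connect(gain).connect(ctx.destination);
  osc.start(0);
  osc.stop(seconds);

  const rendered = await ctx.startRendering();
  return rendered.getChannelData(0); // raw samples for channel 0
}
```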
Discord released a Rich Presence SDK that lets C++ developers integrate their game with Discord; it is really an extension of the RPC server already built into the Discord client. Lately this SDK has been ported to more and more languages, including Java and JavaScript. I know AIR can interface with JavaScript, but the JavaScript wrapper seems to rely on Node.js, which could get messy.
With all of these new wrappers popping up, what would be the best way of going about this for an AIR app? I'm trying to figure out the most efficient method for creating an AS3 version, but I'm not sure which path is best.
Edit: I don't even need the fancier features; I just want to be able to set a user's local Discord client to display extra game info.
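For reference, the Node.js wrapper side boils down to something like this (assuming the community `discord-rpc` npm package; the client ID is a placeholder). How an AIR app would drive this process is exactly the open question:

```typescript
// Sketch only: a small Node.js helper using the community `discord-rpc`
// package to set Rich Presence. The client ID is a placeholder; how AIR talks
// to this process (e.g. NativeProcess over stdin/stdout) is the part I am
// trying to figure out.
import { Client } from "discord-rpc";

const clientId = "YOUR_APP_CLIENT_ID"; // placeholder
const rpc = new Client({ transport: "ipc" });

rpc.on("ready", () => {
  // Show extra game info on the user's local Discord client.
  rpc.setActivity({
    details: "In the main menu",
    state: "Idle",
    startTimestamp: Date.now(),
  });
});

rpc.login({ clientId }).catch(console.error);
```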
First of all, I haven't started implementing the system I'm about to describe, as I didn't want to commit to building something without knowing whether it was even possible.
So, what I'm trying to achieve is to build a chrome-app that downloads the audio from certain websites (e.g. YouTube and SoundCloud) using youtube-dl, post-processes it with ffmpeg and then uploads it to a cloud service via some API. The reason I want to do it as a chrome-app is that all the work could happen on the client side (no need for servers), and I'd have the ability to insert JavaScript into the pages using content scripts, which would make the app pretty simple to use (I could add buttons such as 'download song' and the like).
Although I have already read the documentation explaining the NaCl Technical Overview and some of the Application Structure, I'm still not sure whether I would be able to make these calls from some C/C++ module or whether they would be denied for security reasons.
To summarize: assuming the user has the needed dependencies on their system (youtube-dl, Python, ffmpeg, etc.), is it possible to call third-party tools such as the ones described above from a chrome-app using NaCl?
Thank you all in advance,
Chrome apps are normally sandboxed.
Less so than extensions: they can reach many more system resources via app APIs.
Still, what you're describing is executing libraries/utilities outside the browser, and that's not normally allowed.
(P)NaCl is tightly sandboxed in this regard. See this old question; it still applies: you can only use third-party code that is compiled into NaCl along with your app, you can't just link against a system library. There are some library ports to NaCl, but it's not automatic.
A few years back you would normally have used a mechanism like NPAPI to reach out to a library outside the browser. It's deprecated and won't work anymore. In its place, Chrome offers a pipe-like connection (over stdio) to an external program, called Native Messaging. You could use it to perform operations with system-level libraries and tools, but the downside is that you can't bundle the native host with your app; you'll need a separate installer for it.
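As a rough illustration of the app side of Native Messaging (the host name and message shape below are made up for this sketch; the host itself would wrap youtube-dl/ffmpeg and be installed separately):

```typescript
// Chrome-app side of Native Messaging. The host name "com.example.downloader"
// and the message shapes are made up for this sketch; the host itself has to
// be registered on the user's machine by a separate installer.
const port = chrome.runtime.connectNative("com.example.downloader");

port.onMessage.addListener((msg: { status: string; file?: string }) => {
  console.log("Host replied:", msg);
});

port.onDisconnect.addListener(() => {
  console.log("Host disconnected:", chrome.runtime.lastError?.message);
});

// Ask the native host to download and convert a URL on the user's machine.
port.postMessage({ action: "download", url: "https://www.youtube.com/watch?v=..." });
```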
I thought PhoneGap was a simple wrapper for HTML5, but it looks like it does in fact compile into native in some way.
I have a Cloud based, HTML5 Single Page Web application that I just want to run full screen, and distribute via an app store.
Should I just create a PhoneGap App with an InAppBrowser?
If you intend to publish an HTML5-based app in a "native" app store such as Google Play or the Apple App Store, you have 2 options:
1- Implement your own native application using a webview to show your web-based app.
2- Use an existing framework like PhoneGap/Cordova even if you don't use the native APIs. The framework already sets everything up so that you just deploy your HTML5 code (see the sketch below).
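For option 2, a minimal sketch of an app whose only job is to open a remotely hosted HTML5 app with cordova-plugin-inappbrowser (the URL is a placeholder for your own application):

```typescript
// Sketch for option 2: a Cordova app whose only job is to show a remotely
// hosted HTML5 app via cordova-plugin-inappbrowser. The URL is a placeholder
// for your own hosted application.
document.addEventListener("deviceready", () => {
  // "_blank" opens the InAppBrowser window; location=no hides the address bar.
  const browser = (window as any).cordova.InAppBrowser.open(
    "https://app.example.com/",
    "_blank",
    "location=no"
  );

  browser.addEventListener("loaderror", (event: any) => {
    console.error("Failed to load remote app:", event.message);
  });
});
```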
However, if your web app is meant to be hosted remotely (i.e. not run from local files), you may encounter problems when trying to publish in the Apple Store. They have some strict rules about remote content, and about publishing apps that may not provide much more value and/or functionality than a simple web-app can.
Best.
I have searched for Java-based web application frameworks over the last few days. I have to build a Java EE backend and an HTML5/CSS3/JavaScript frontend that can be accessed with multi-touch capable devices, so I will need a modern JavaScript framework like Sencha Touch.
My backend will be built with Java EE, Hibernate and MySQL. I have two kinds of data transfer: AJAX/JSON, so the page does not need to be reloaded, and normal pages which reload by sending a form with POST (or do you think I should do everything with AJAX/JSON so it feels more like an application?).
I found several web application frameworks:
JavaServer Faces
Apache Wicket
Spring MVC
Handling it only with JAR libraries for JSON (and REST)
Google Web Toolkit
What do you think will fit best? Perhaps you can exclude one of them, that would also be great, so I can take a closer look at the remaining technologies.
Best Regards, Tim.
Interesting question.
Concerning exclusion: if you use a JS framework like Sencha Touch on the frontend, I don't see the point of using something like GWT, which is meant for frontend code generation.
I would probably stick with a more lightweight framework like http://www.playframework.org/.
You get your data from the backend and then hand it over as JSON to your frontend code, i.e. Sencha, SproutCore, Cappuccino, GWT or whatever you choose to use.
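As a trivial illustration of that split, the frontend only has to pull JSON from whatever endpoint your backend exposes (the endpoint path and fields below are placeholders, not tied to any particular framework):

```typescript
// Hypothetical frontend-side sketch: pull JSON from the Java backend and hand
// it to whatever JS framework renders it. The endpoint and fields are
// placeholders, not part of any specific framework.
interface Customer {
  id: number;
  name: string;
}

async function loadCustomers(): Promise<Customer[]> {
  const response = await fetch("/api/customers", {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Backend returned ${response.status}`);
  }
  return response.json();
}

loadCustomers().then((customers) => {
  // Hand the data over to Sencha Touch (or any other frontend framework).
  console.log(`Loaded ${customers.length} customers`);
});
```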
Let me know what you choose :-)
I read about http://rhomobile.com/ and it looks great, but I want to ask whether an application built with it runs in the browser or natively on the device, since it requires HTML and Ruby?
Applications generated using Rhodes are pure native applications... And there is no need to install Ruby on devices, as Rhodes takes care of that.
Maybe this URL is helpful:
http://itsallaboutruby.blogspot.com/2010/08/rhodes-framework.html
Abhishek
Both, actually.
Your application is a web application, but it doesn't run on the internet; it runs on a small web server that is part of your application inside the phone. It also doesn't run in the browser, but rather in a native browser widget inside your application.
Since the webserver runs on the phone itself, it has access to all the native capabilities of the phone, so you can make HTTP calls to the webserver to capture sound from the microphone, shoot video with the camera, get the GPS location, get multitouch info and so on.
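To illustrate the pattern only (this is not the actual Rhodes API; the route below is hypothetical), the app's own pages simply make HTTP requests to the on-device server:

```typescript
// Pattern illustration only: the app's own pages call the small on-device web
// server over HTTP. The "/app/Geo/current" route is hypothetical, not a real
// Rhodes URL.
async function requestDeviceLocation(): Promise<{ lat: number; lon: number }> {
  // Same-origin request: the "server" is running inside the app on the phone.
  const response = await fetch("/app/Geo/current");
  if (!response.ok) {
    throw new Error(`Local server returned ${response.status}`);
  }
  return response.json();
}

requestDeviceLocation().then((pos) => {
  console.log(`Device reports position ${pos.lat}, ${pos.lon}`);
});
```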