I’d like to be able to create a GUI that can be viewed over the network by a remote client.
One approach is to code the whole GUI in HTML5 and run it from a server such as Apache; the main difficulty with this is that the GUI includes at least one, sometimes two, windows containing live video streams (without any sound) and there doesn’t seem to be a good way of streaming live video into HTML5 - especially as it really needs to be live; a few seconds’ latency would be unacceptable.
Another approach (which I’ve done already, and actually works pretty well) is just to code the GUI as a desktop application (for example using Qt), and then to view the desktop remotely using VNC or Windows Remote Desktop. This gives the required responsiveness and lack-of-latency, but has the disadvantage that the whole OS desktop is accessible and not just my one application.
So, here is my question: is there a mechanism or a framework available that would enable me to use RFB (i.e. the protocol underlying VNC) or RDP (that underlying Windows Remote Desktop) to provide remote access to a single GUI application rather than a whole desktop?
When comparing RDP and RFB, the main difference is that RDP mostly sends metadata (drawing commands) rather than raw pixels, whereas RFB sends the whole framebuffer of the screen, so RFB is slower than RDP. VNC uses RFB, while Windows applications such as Lync use RDP.
Here you can see a simple RDP example: http://sandaruwmp.blogspot.com/2014/05/remote-desktop-application-with-rdp.html
You can in fact create an application that shares only a single application, and you can also combine RDP with other protocols.
For example, https://github.com/sandaru/RDAPP uses RDP together with TCP, and lets you select a single application to show.
That application shares the desktop via RDP and listens on a TCP port; over that port you can send commands such as "stop selected processes", "focus single application" and "share whole window", and the RDP sharing reacts to those TCP requests (a rough sketch of that idea is below).
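Here is a minimal, hypothetical Python sketch of that command-channel pattern. It is not the code from the RDAPP repository; the port number, the command strings (taken from the list above) and the handler bodies are placeholders.

    import socket

    # Commands quoted in the answer above; a real implementation would map each
    # one to a change in the RDP sharing session.
    COMMANDS = {"stop selected processes", "focus single application", "share whole window"}

    def handle_command(cmd):
        if cmd in COMMANDS:
            print("received command:", cmd)   # placeholder: reconfigure sharing here
        else:
            print("unknown command:", cmd)

    def serve(port=9000):                     # port 9000 is an arbitrary placeholder
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", port))
            srv.listen(1)
            while True:
                conn, _addr = srv.accept()
                with conn:
                    data = conn.recv(1024)
                    if data:
                        handle_command(data.decode("utf-8", "replace").strip().lower())

    if __name__ == "__main__":
        serve()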
I hope this will be useful for you.
NOTE: the source above does not contain any NAT traversal mechanism.
I'm new to Federation Services and I'm trying to understand how ADFS works as a whole and I've started to get down into the details. I followed along with creating an app using OIDC to authenticate a user, however, within the tutorial, they specified using a "Server Application" when setting up an Application Group. This ended up not working for me so I tried setting up a "Native Application" application group for kicks and was able to successfully login.
The thing that threw me off is, I ended up hosting ADFS on a server outside of the domain in which I had my application running, so I'm confused as to how that is "native" in terms of ADFS.
I went looking for this answer in Microsoft's documentation, but I didn't find the information very clear.
Native Application:
"Sometimes called a public client, this is intended to be a client app that runs on a pc or device and with which the user interacts."
Server Application:
"A web application that runs on a server and is generally accessible to users via a browser. Because it is capable of maintaining its own client 'secret' or credential, it is sometimes called a confidential client."
This may seem simple to some, but I'm trying to really get a grip on what would be used when. To me it sounds like a native application is one that runs on the same PC the user is sitting at, while a server application runs remotely on a machine the user isn't directly using. Is it really that simple, or am I misunderstanding?
A native application (in Microsoft speak) is something that is not browser-based, e.g. a mobile app. The code runs client-side. It may use JavaScript, in which case any secret key would be publicly accessible (the secret key is one of the OAuth parameters). You use ADAL / MSAL to access it.
A server application runs server-side, e.g. a web API. The secret key is not publicly accessible. You use OWIN to access it.
These terms have no relevance to where ADFS is actually installed. Native applications typically are not domain joined.
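If it helps to see the distinction in code, below is a minimal sketch using the MSAL library for Python (one of the MSAL libraries mentioned above). The authority URL, client IDs, secret and scopes are placeholders, and the exact values depend on your ADFS application group configuration.

    import msal

    AUTHORITY = "https://adfs.example.com/adfs"    # placeholder ADFS authority

    # "Native Application" -> public client: runs on the user's device, cannot
    # keep a secret, and authenticates the user interactively.
    public_app = msal.PublicClientApplication(
        client_id="native-app-client-id",          # placeholder
        authority=AUTHORITY,
    )
    user_token = public_app.acquire_token_interactive(scopes=["openid"])

    # "Server Application" -> confidential client: runs server-side and can
    # safely hold a client secret (or certificate).
    confidential_app = msal.ConfidentialClientApplication(
        client_id="server-app-client-id",          # placeholder
        client_credential="server-app-secret",     # placeholder
        authority=AUTHORITY,
    )
    app_token = confidential_app.acquire_token_for_client(
        scopes=["https://example.com/api/.default"]  # placeholder resource scope
    )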
I sort of makeshift-followed this guide on how to set up remote debugging. Since I am using Adobe Animate to compile my app, I assume it has done the majority of the build steps already, as I get a screen similar to the one described.
I don't understand, though. I have port forwarding set up on my router so that it goes to my PC, and TCP port 7935 is open. Windows Firewall on or off doesn't seem to make a difference; it even prompted me to allow or deny fdb after I ran it. Still, I can't get my phone to connect via remote debugging. I want to be able to send this build to my client, who is having an issue with the app, so I can see what's going on under the hood instead of relying on a giant pile of try/catch statements and screenshots. Any help?
I tried a dummy domain and the app seems to know that it can't connect to it. When I try my own domain or my IPv4 address, it doesn't connect; it just freezes up the app.
I don't know whether it works in Animate CC or not, but it works via Flash Builder. I'm using a real Android device and I have the Android SDK tools installed on my PC.
Yes, I have followed that tutorial from the official Adobe docs, but it didn't work for me.
First, simply connect your device to your PC.
Actually, you can debug your app remotely as long as your device is connected to your PC. This step doesn't necessarily require FDB.
In my case, all I needed was something like:
adb connect 192.168.xx.xx:port
This will connect your Android device to your PC over your default network.
Second, set up debugging over the network.
You've done this in Animate CC already; in addition, you might want to check "install application on the connected device".
Third, just debug as usual.
You get all the usual debugging output, including traces.
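For reference, here is a hedged Python sketch of the same adb steps. The IP address is a placeholder, and 5555 is adb's default TCP/IP port; switching the device to TCP mode (adb tcpip) is done once while it is still attached over USB.

    import subprocess

    DEVICE_IP = "192.168.1.50"   # placeholder: your device's Wi-Fi address
    PORT = "5555"                # adb's default TCP/IP port

    # With the device attached over USB, switch its adb daemon to TCP mode.
    subprocess.run(["adb", "tcpip", PORT], check=True)

    # Connect over the network; the USB cable can be unplugged afterwards.
    subprocess.run(["adb", "connect", f"{DEVICE_IP}:{PORT}"], check=True)

    # Verify the device is listed before starting the debug session in Animate CC.
    subprocess.run(["adb", "devices"], check=True)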
I need to convert a web page that requires viewing an X window from using the VncViewer applet to some HTML5-based VNC client. The worry is that NPAPI will be desupported in the near future by browsers (mainly Chrome), which disables applet functionality.
I looked at noVNC and websockify and got them to work. But here is my problem: we still have some clients on IE8, which does not support Canvas. For those clients, which have Java enabled and won't be moving to a newer version of IE or to Chrome, we still want to keep the applet version running. However, by running websockify in wrap mode, it seems I can no longer connect directly to the VNC server (i.e. not through websockify) to keep those applet clients functional.
For example, my command to run websockify is:
run 5903 --wrap-mode=ignore -- vncserver -geometry 1024x768 :3
After this, I tried to use the regular VncViewer client to connect to port 5903, and it was rejected; only the websockified page can view the VNC window. If I change the 5903 to 5902, then I can use the regular VncViewer client to view the window on 5903, but the websockified page can't view it on 5902.
Is there any hope of keeping concurrent connections to my VNC server available (both websockify and regular connections)?
Thanks!
I would recommend starting your VNC server normally (not using websockify wrap mode). Then run websockify normally to target the VNC port. The Java client should continue to target the regular VNC port. The noVNC client should connect to the websockify listen port (which will then connect to the VNC server target).
The problem with wrap mode is that the original port is "hidden" (moved to a random high port and accessible to localhost only) and only the websocket port is exposed. But you still need the regular VNC port to be accessible for the Java client.
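As a concrete sketch of that setup (one way to do it, not the only one): display :3 / port 5903 match the question, while the websockify listen port 6080 is just an illustrative choice.

    import subprocess

    # 1. Start the VNC server normally on display :3, so it listens on TCP 5903
    #    as usual. The Java applet clients keep connecting directly to 5903.
    subprocess.run(["vncserver", "-geometry", "1024x768", ":3"], check=True)

    # 2. Run websockify as a separate proxy: listen for noVNC's WebSocket
    #    connections on 6080 and forward them to the plain VNC port 5903.
    websockify_proxy = subprocess.Popen(["websockify", "6080", "localhost:5903"])

noVNC then points at port 6080 while the applet clients stay on 5903, so both kinds of client can connect at the same time.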
I'm pretty new to Unix operating systems. I'm running CentOS 6.5, and I need to run one (or ideally more) instance of Flash Player continually in the background, and I've no idea how to do this.
The reason is because in Flash I'm using the RTMFP protocol to send data between clients P2P, and it would be useful for me to have a few test clients running on my server all the time.
How would I go about doing this? The Flash program needs to be navigated visually through its menus to get it into the required state. Currently I'm just using PuTTY; what can I install to get a GUI for this, and how might I go about getting Flash Player (10.1 or higher) to work?
Thanks a lot!
I think I have an idea of what you're trying to do. To clarify: you want several Flash applications running in browsers or via a standalone Flash Player to act as test users for your RTMFP setup?
If that's the case, use VNC (something like running multiple instances of x11vnc on different ports) to log into several GUI accounts on your system and run the application (Linux is multi-user by default). You can disconnect the VNC viewer without ending your session, so this should work for what I think you're trying to do. A sketch follows.
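A minimal sketch of that suggestion, assuming the GUI sessions already exist on displays :0 and :1; the display numbers and VNC ports are placeholders that depend on which X sessions are running on the box.

    import subprocess

    # (X display, VNC port) pairs for the logged-in GUI sessions - placeholders.
    sessions = [(":0", 5900), (":1", 5901)]

    for display, port in sessions:
        # -forever keeps x11vnc running after a viewer disconnects, so the Flash
        # Player instances stay up when you close your VNC client.
        subprocess.Popen(["x11vnc", "-display", display,
                          "-rfbport", str(port), "-forever"])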
Hope this helps.
How do I run OpenERP Web 6.1 on a different machine than OpenERP server?
In 6.0 this was easy: there were two config files and two servers (the server and the "web client"), and they communicated over TCP/IP.
I am not sure how to setup something similar for 6.1.
I was not able to find helpful documentation on this subject. Do they still communicate over TCP/IP? How do I configure the "web client" to use a different server machine? I would like to understand the new concept here.
tl;dr answer
It's meant only for debugging, but you can.
Use the openerp-web startup script that is included in the openerp-web project, which you can install from the source. There's no separate installer for it, as it's not meant for production. You can pass parameters to set the remote OpenERP server to connect to, e.g. --server-host, --server-port, etc. Use --help to see the options.
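For illustration only, a debugging-style launch might look roughly like the sketch below; the host and port values are placeholders, and the exact option syntax should be checked with --help as noted above.

    import subprocess

    # Hypothetical example: start the standalone web client and point it at a
    # remote OpenERP server. Host/port values are placeholders.
    subprocess.Popen([
        "openerp-web",
        "--server-host", "192.168.1.10",   # machine running openerp-server (placeholder)
        "--server-port", "8069",           # the server's RPC port (placeholder)
    ])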
Long answer
OpenERP 6.1 comes with a series of architectural changes that allow:
running many OpenERP server processes in parallel, thanks to improved statelessness. This makes distributed deployment a breeze, and gives load-balancing/fail-over/high-availability capabilities. It also allows OpenERP to benefit from multi-processor/multi-core hardware.
deploying the web interface as a regular OpenERP module, relieving you from having to deploy and maintain two separate server processes. When it runs embedded the web client can also make direct Python calls to the server API, avoiding unnecessary RPC marshalling, for an extra performance boost.
This change is explained in greater detail in this presentation, along with all the technical reasons behind it.
A standalone mode is still available for the web client with the openerp-web script provided in the openerp-web project, but it is meant for debugging purposes rather than production. It runs in mono-thread mode by default (see the --multi-thread startup parameter), in order to serialize all RPC calls and make debugging easier. In addition to being slower, this mode will also break all modules that have a web part, unless all regular OpenERP addons are also copied in the --addons-path of the web process. And even then, some will be broken because they may still partially depend on the embedded mode.
Now if you were simply looking for a distributed deployment model, stop looking: just run multiple OpenERP (server) processes with the full stack. Have a look at the presentation mentioned above to get started with Gunicorn, WSGI, etc.
Note: due to these severe limitations and its relative uselessness (versus the maintenance cost), the standalone mode for the web client has been completely removed (see rev. 3200 on Launchpad) in OpenERP 7.0.