STM32F429 HID host and MSC host: how to combine? - stm32f4discovery

I am using an STM32F429 Discovery board, with the USB port in FS mode.
I want to use two devices: a pen drive and a keyboard. When the pen drive is plugged in, the host should work as msc_host_device, and when the keyboard is plugged in, the host should work as hid_host_device, on the same USB port.
Using separate libraries, both devices work individually, but now I want to combine them.
How can I do this?

Check
Projects/STM32469I-Discovery/Applications/USB_Host/DynamicSwitch_Standalone
in STM32CubeF4; it does exactly what you are trying to do. As far as I understand it, the basic idea is (see the sketch below):
- call USBH_RegisterClass() after USBH_Init() for each device class the application can handle
- when the USB user callback is invoked with HOST_USER_CLASS_ACTIVE, the active device class becomes available from USBH_GetActiveClass()
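For reference, here is a minimal C sketch of that flow, loosely based on the DynamicSwitch_Standalone example; hUSBHost, class_ready and the commented-out application hooks are placeholders, so treat it as an outline rather than a drop-in solution.

#include "usbh_core.h"
#include "usbh_msc.h"
#include "usbh_hid.h"

USBH_HandleTypeDef hUSBHost;
static volatile uint8_t class_ready = 0;   /* set once a class driver is active */

/* User callback passed to USBH_Init() */
static void USBH_UserProcess(USBH_HandleTypeDef *phost, uint8_t id)
{
  switch (id)
  {
  case HOST_USER_CLASS_ACTIVE:
    class_ready = 1;    /* USBH_GetActiveClass() is now meaningful */
    break;
  case HOST_USER_DISCONNECTION:
    class_ready = 0;    /* device unplugged, reset application state */
    break;
  default:
    break;
  }
}

int main(void)
{
  /* ... HAL, clock and board init ... */

  USBH_Init(&hUSBHost, USBH_UserProcess, 0);

  /* Register every class the application can handle on this port */
  USBH_RegisterClass(&hUSBHost, USBH_MSC_CLASS);
  USBH_RegisterClass(&hUSBHost, USBH_HID_CLASS);

  USBH_Start(&hUSBHost);

  while (1)
  {
    USBH_Process(&hUSBHost);    /* run the host state machine */

    if (class_ready)
    {
      switch (USBH_GetActiveClass(&hUSBHost))
      {
      case USB_MSC_CLASS:       /* pen drive attached */
        /* MSC_Application(&hUSBHost); */
        break;
      case USB_HID_CLASS:       /* keyboard attached */
        /* HID_Application(&hUSBHost); */
        break;
      default:
        break;
      }
    }
  }
}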

Related

How can I connect IKDeviceBrowserView with IKScannerDeviceView?

I am trying to build a macOS Objective-C app using Xcode 12.3 on macOS 10.15 to obtain an image from a scanner. I followed the instructions provided by Apple in 2008 in https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.9131&rep=rep1&type=pdf and dragged ImageKitDeviceBrowserView and ImageKitScannerDeviceView from the Object Library onto a window. However, control-dragging from the BrowserView onto the device view only moves the BrowserView; no connection is established.
Ctrl-dragging only sets up constraints between the two objects.
In an example application (the GIMP scanner plugin), the ScannerDevice view has Scanner Device View as its delegate outlet, and the BrowserView has a referencing outlet of delegate connected to Device Browser View, but I cannot seem to make this connection.
Can anyone tell me how to do this?
I found that if I drag the ImageKitDeviceBrowserView and ImageKitScannerDevice objects outside the desired window, I can Ctrl-drag from the BrowserView to the ScannerDevice and create the required connection. I could then cut and paste the two objects into my window.
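For what it's worth, the same relationship can also be wired up in code instead of Interface Builder. The sketch below is only a hedged illustration (the ScanWindowController class and the two outlets are hypothetical): the controller acts as the browser view's delegate and hands the selected scanner to the scanner view.

// Hypothetical window controller; browserView and scannerView are outlets
// connected to the IKDeviceBrowserView and IKScannerDeviceView in the window.
#import <Quartz/Quartz.h>
#import <ImageCaptureCore/ImageCaptureCore.h>

@interface ScanWindowController : NSWindowController <IKDeviceBrowserViewDelegate>
@property (weak) IBOutlet IKDeviceBrowserView *browserView;
@property (weak) IBOutlet IKScannerDeviceView *scannerView;
@end

@implementation ScanWindowController

- (void)windowDidLoad
{
    [super windowDidLoad];
    self.browserView.delegate = self;              // equivalent of the IB delegate connection
    self.browserView.displaysLocalScanners = YES;  // show scanners in the browser
}

// Called when the user picks a device in the browser view.
- (void)deviceBrowserView:(IKDeviceBrowserView *)deviceBrowserView
       selectionDidChange:(ICDevice *)device
{
    if ([device isKindOfClass:[ICScannerDevice class]]) {
        self.scannerView.scannerDevice = (ICScannerDevice *)device;
    }
}

@end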

How to open my Electron program when a link (like myprogram://a/a) is clicked in a web browser

I want to transfer information from my website to my Electron program by using a link that has some data in it (like myprogram://data), but I can't seem to find any information about this online. Any help would be greatly appreciated.
Thanks!
You need to register your app as a protocol handler using app.setAsDefaultProtocolClient:
app.setAsDefaultProtocolClient("myprogram")
On Windows, when a myprogram://data link is clicked, a new instance of your application will be launched and the arguments will be included in process.argv.
Use app.requestSingleInstanceLock() if you don't want multiple instances of your app running at once.
On macOS, you can get the URL via the open-url event.
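Putting those pieces together, a main-process sketch could look roughly like the following; the myprogram scheme and the handleDeepLink helper are placeholders, not part of the original answer.

// main.js (Electron main process) - rough sketch, not a drop-in solution
const { app } = require('electron');

app.setAsDefaultProtocolClient('myprogram');

// Only allow one instance, so clicked links are routed to the running app.
const gotTheLock = app.requestSingleInstanceLock();
if (!gotTheLock) {
  app.quit();
} else {
  // Windows/Linux: a second launch passes the URL on the command line.
  app.on('second-instance', (event, argv) => {
    const url = argv.find(arg => arg.startsWith('myprogram://'));
    if (url) handleDeepLink(url);
  });

  // macOS: the URL arrives through the open-url event instead.
  app.on('open-url', (event, url) => {
    event.preventDefault();
    handleDeepLink(url);
  });

  app.whenReady().then(() => {
    // createWindow(); ...
    // A first launch on Windows/Linux may also carry the URL in process.argv.
    const url = process.argv.find(arg => arg.startsWith('myprogram://'));
    if (url) handleDeepLink(url);
  });
}

function handleDeepLink(url) {
  // Placeholder: parse "myprogram://data" and act on it.
  console.log('Deep link:', url);
}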

Is there a way to combine Chrome arguments with a user profile?

I am trying to test a webcam flow using a fake stream; however, there is a check that the camera setting is allowed before it can work.
I am working with TestCafe, and my command is similar to the one below, which doesn't apply the arguments (unless I omit the profile).
Using --use-fake-ui-for-media-stream and --use-file-for-fake-video-capture works for the stream, but the camera settings check still fails. I tried using a user profile, which works for the camera settings but not for the Chrome arguments. Does anyone know how I can combine these two?
chrome:userProfile --start-fullscreen --allow-insecure-localhost --use-fake-device-for-media-stream --use-fake-ui-for-media-stream --use-file-for-fake-video-capture="/path/to/video.y4m"
Chrome can't apply CLI flags without creating a new browser instance, and it can't create a new browser instance if other Chrome instances are using the same profile.
If you want to use "chrome:userProfile" together with CLI flags, you can close all Chrome processes on your machine. Or you can create a dedicated directory for a temporary Chrome profile and use it in tests by specifying chrome --user-data-dir=$TEMP_PROFILE_DIRECTORY as the browser.
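For example, assuming you created an empty directory such as /tmp/tc-chrome-profile for the run, the invocation might look roughly like this (the profile path, flag set and test file are placeholders):

testcafe "chrome --user-data-dir=/tmp/tc-chrome-profile --use-fake-ui-for-media-stream --use-fake-device-for-media-stream --use-file-for-fake-video-capture=/path/to/video.y4m" tests/webcam-test.js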

How can XCUITest be used from within an OSX application to remote control iDevices?

Visionary scenario and current goal
My visionary scenario is to remotely control a non-jailbroken iDevice with as little lag as possible.
My current goal is to execute a tap on an iDevice from within an OS X application. For example: a button in a Cocoa application which, when clicked, taps the middle of the screen on a Lightning-connected iDevice.
I am not bound to OS X and am open to other avenues.
Approach
XCUITest in the XCTest framework allows running automated UI tests. It is the native way of executing remote taps on iDevices.
The following line would execute a tap in the middle of the screen:
XCUIApplication().coordinateWithNormalizedOffset(CGVectorMake(0.5, 0.5)).tap()
Cheat Sheet for XCUITest: http://masilotti.com/ui-testing-cheat-sheet/
Unofficial Reference: http://masilotti.com/xctest-documentation/
Question
How can I use the XCUITest framework from within an OS X application to remotely tap a connected iDevice? I don't actually want to UI-test an existing application.
My problems start with #import XCTest, which is not allowed without a test target, and continue with .tap() (iOS) not being available in my Cocoa application. How do I integrate all this?
Other avenues
What other way should I use instead? It must be possible to execute taps on a connected iDevice remotely, because Appium and Calabash use the now-deprecated UIAutomation framework to do so. Both must switch to XCUITest from iOS 10 onwards.
Edit 1 - Current status
It seems my approach is much too complicated and basically amounts to implementing a lightweight Appium. My current approach is to use the Appium server, which handles UIAutomation (and, in the future, XCUITest). I then implement my own client to send HTTP requests to the Appium REST API.
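For reference, that REST API can be exercised with plain HTTP calls before writing any client code. A rough sketch against a locally running Appium server is shown below; the capability values, the device UDID, the bundle id, the session id and the coordinates are all placeholders, and the exact endpoints may vary between Appium versions.

# Start a session against a connected device (UDID and bundle id are placeholders).
curl -X POST http://localhost:4723/wd/hub/session \
     -H "Content-Type: application/json" \
     -d '{"desiredCapabilities":{"platformName":"iOS","deviceName":"iPhone","udid":"<device-udid>","bundleId":"com.example.app"}}'

# Tap at (200, 300) using the session id returned above.
curl -X POST http://localhost:4723/wd/hub/session/<session-id>/touch/perform \
     -H "Content-Type: application/json" \
     -d '{"actions":[{"action":"tap","options":{"x":200,"y":300}}]}'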

Can an AIR app be programmed to handle a URL protocol?

I'm writing what is essentially a browser in Adobe AIR (ActionScript, not AJAX). A great bit of functionality to implement would be protocol handling. iTunes, for instance, handles the itms protocol; when your friend sends you a link beginning with "itms://", it launches iTunes as long as it's installed. Is there a way to write an AIR app (requiring AIR 2 would be fine) that can be the "handler" for a protocol in this way?
There is no way, programmatically speaking, to specifically handle a particular protocol. However, there is InvokeEvent. InvokeEvent is fired when the application is "invoked", either when it's explicitly launched or when an associated file or URL is activated.
The process of associating your app with a particular file type or protocol scheme is separate and platform-dependent. On iOS, for example, you would need to specify the protocol in Info.plist under CFBundleURLTypes/CFBundleURLSchemes.
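To illustrate the InvokeEvent side, a minimal ActionScript sketch might look like this; the myscheme:// prefix is just an example and the routing is left as a placeholder.

import flash.desktop.NativeApplication;
import flash.events.InvokeEvent;

// Fired on explicit launch and whenever an associated file/URL invokes the app.
NativeApplication.nativeApplication.addEventListener(InvokeEvent.INVOKE, onInvoke);

function onInvoke(event:InvokeEvent):void
{
    // event.arguments holds whatever the OS passed along, e.g. "myscheme://some/path"
    for each (var arg:String in event.arguments)
    {
        if (arg.indexOf("myscheme://") == 0)
        {
            // hand the URL off to the application's own routing logic
            trace("Invoked with URL: " + arg);
        }
    }
}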
Yes. You can use the URLLoader class to download data in binary form (URLLoaderDataFormat.BINARY) and then parse it as appropriate. See the CS3 documentation on working with external data.
http://www.patrick-heinzelmann.de/labs/lastfm/
I'm not sure exactly how it works and I don't see a way to download the app, so I can't even test it, but maybe it will help...
Check out this page. I am trying to find out the same thing, but I haven't found any solution that uses just AIR yet. It seems you might need a custom installer to set up the correct registry entries, and a proxy application to "wash" the input into a correct format that can then start your application with the correct command-line parameters. Hope this can be of some assistance.