I am trying to build a macOS Objective-C app using Xcode 12.3 on macOS 10.15 to obtain an image from a scanner. I followed the instructions Apple provided in 2008 (https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.363.9131&rep=rep1&type=pdf) and dragged an ImageKitDeviceBrowserView and an ImageKitScannerDeviceView from the Object Library onto a window. However, control-dragging from the browser view onto the device view only moves the browser view; no connection is established.
Ctrl-dragging only sets up constraints between the two objects.
In an example application (the GIMP scanner plug-in), the browser view's delegate outlet is connected to the scanner device view (the scanner device view shows a referencing outlet of delegate from the device browser view), but I cannot seem to make this connection.
Can anyone tell me how to do this?
I found that if I drag the ImageKitDeviceBrowserView and ImageKitScannerDeviceView outside the desired window, I can Ctrl-drag from the browser view to the scanner device view and create the required connection. I could then cut and paste the two objects into my window.
I want to transfer information from my website to my Electron app by using a link that carries some data (like myprogram://data), but I can't seem to find any information about this on the internet. Any help would be gladly appreciated.
Thanks!
You need to register your app as a protocol handler using app.setAsDefaultProtocolClient
app.setAsDefaultProtocolClient("myprogram")
On Windows, when a "myprogram://data" link is clicked, a new instance of your application is launched and the arguments are included in process.argv. Use app.requestSingleInstanceLock if you don't want multiple instances of your app running; the URL then reaches the already-running instance through the second-instance event. On macOS, you can get the URL through the open-url event instead. A minimal sketch combining these pieces is below.
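For illustration, a minimal main-process sketch of how these pieces might fit together; handleDeepLink is a hypothetical helper, and window creation is omitted:

const { app } = require('electron');

// Register the custom scheme. (In a non-packaged Windows build you may
// need to pass process.execPath and the script path as extra arguments;
// see the Electron docs for setAsDefaultProtocolClient.)
app.setAsDefaultProtocolClient('myprogram');

const gotTheLock = app.requestSingleInstanceLock();
if (!gotTheLock) {
  app.quit();
} else {
  // Windows/Linux: the second instance's argv contains the URL.
  app.on('second-instance', (event, argv) => {
    const url = argv.find((arg) => arg.startsWith('myprogram://'));
    if (url) handleDeepLink(url);
  });

  // macOS: the URL arrives via the open-url event instead.
  app.on('open-url', (event, url) => {
    event.preventDefault();
    handleDeepLink(url);
  });

  // First launch on Windows: the URL may already be in process.argv.
  const initialUrl = process.argv.find((arg) => arg.startsWith('myprogram://'));
  if (initialUrl) handleDeepLink(initialUrl);
}

// Hypothetical handler: pull the data portion out of myprogram://data.
function handleDeepLink(url) {
  console.log('Received:', url.replace('myprogram://', ''));
}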
I am using an STM32F429 Discovery board, with the USB port in FS mode.
I want to use two devices: one is a pen drive and the other is a keyboard. When the pen drive is plugged in, the host should work as msc_host_device, and when the keyboard is plugged in, the host should work as hid_host_device, on the same USB port.
Using separate libraries, both devices work, but now I want to combine them.
How can I do this?
Check
Projects/STM32469I-Discovery/Applications/USB_Host/DynamicSwitch_Standalone
in STM32CubeF4; it does exactly what you are trying to do. As far as I understand it, the basic idea is:
call USBH_RegisterClass() after USBH_Init() once for each device class the application can handle;
when the USB callback function is called with HOST_USER_CLASS_ACTIVE, the connected device's class becomes available from USBH_GetActiveClass().
Visionary scenario and current goal
My visionary scenario is to remotely control a non-jailbroken iDevice with as little lag as possible.
My current goal is to execute a tap on an iDevice from within an OS X application. For example: a button in a Cocoa application which, when clicked, taps the middle of the screen of a Lightning-connected iDevice.
I am not bound to OS X and am open to other avenues.
Approach
XCUITest, part of the XCTest framework, allows running automated UI tests. It is the native way of executing remote taps on iDevices.
The following line would execute a tap in the middle of the screen:
XCUIApplication().coordinateWithNormalizedOffset(CGVectorMake(0.5, 0.5)).tap()
Cheat Sheet for XCUITest: http://masilotti.com/ui-testing-cheat-sheet/
Unofficial Reference: http://masilotti.com/xctest-documentation/
Question
How can I use the XCUITest framework from within an OS X application to remotely tap a connected iDevice? I don't actually want to UI-test an existing application.
My problems start with #import XCTest, which is not allowed without a test target, and continue with .tap() (iOS) not being available in my Cocoa application. How do I integrate all this?
Other avenues
What other way should I use instead? It must be possible to execute taps on a connected iDevice remotely, because Appium and Calabash use the now-deprecated UIAutomation framework to do so. Both must switch to XCUITest from iOS 10 onwards.
Edit 1 - Current status
It seems my approach is much too complicated and basically amounts to implementing Appium-light. My current plan is to use the Appium server, which handles UIAutomation (and, in the future, XCUITest), and to implement my own client that sends HTTP requests to the Appium REST API. A sketch of such a client is below.
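For illustration, a minimal Node.js client sketch, assuming an Appium 1.x server on localhost:4723 speaking the old JSON Wire Protocol, a global fetch (Node 18+), and placeholder capabilities:

const APPIUM = 'http://localhost:4723/wd/hub';

// Create a session; the capabilities are placeholders for a real
// device UDID and app bundle.
async function createSession() {
  const res = await fetch(APPIUM + '/session', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      desiredCapabilities: {
        platformName: 'iOS',
        deviceName: 'iPhone',
        udid: '<device-udid>',
        app: '/path/to/YourApp.app'
      }
    })
  });
  return (await res.json()).sessionId;
}

// Tap at absolute screen coordinates via the touch/perform endpoint.
async function tap(sessionId, x, y) {
  await fetch(`${APPIUM}/session/${sessionId}/touch/perform`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ actions: [{ action: 'tap', options: { x: x, y: y } }] })
  });
}

// Tap roughly the middle of an iPhone screen in portrait (points).
createSession().then((id) => tap(id, 187, 333)).catch(console.error);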
We have an SPA web application that we're trying to convert into a WinJS project as a native Windows Store app. For the most part, the JavaScript works, except for DOM manipulations deemed unsafe.
One thing that is not apparent is how the start page of the app (e.g. index.html) can be supplied with query-string and hash parameters. Our site's main page is designed to behave differently based on parameters.
e.g. index.html?contextId=xxxxx#environment=xxxxx
I tried adjusting the value in package.appxmanifest to no avail: it throws errors on query strings, and hash parameters silently do not persist.
UPDATE: Project background
A brief note about what our app does, why the naive desire above won't work, and (in the answer below) how we went about this issue.
Our web app is a highly dynamic, data-driven application that relies entirely on data to figure out what to render. The ?contextId=xxxxx parameter is therefore crucial: it tells our system which data to load, which in turn determines what visual components to load, and so on recursively, forming wildly different UIs.
We were therefore looking for some means to supply these parameters, like traditional command-line parameters to the same executable, to produce different UIs, and thus different "apps", by mere changes to those parameters. Something like the "config transform" mechanism for web.config in ASP.NET web projects would be most welcome.
However, further testing showed this is not possible: a single Windows Store app project has a GUID that is baked into the packaged app bundle. Building the same project multiple times with different "build configs" would just overwrite a previous installation, since the builds are the same app with increasing version numbers. The answer details how we went about this.
Windows Store apps don't receive URI parameters when launched from their primary tile. In that case, you should make sure the app defaults to suitable values: e.g., if you were thinking of supplying defaults in the manifest, default to those in the app's activation handler for the ActivationKind.launch case when eventObject.detail.arguments is empty.
There are two other ways to launch an app that can supply other arguments.
First is to launch via a secondary tile. When you create the tile from the app (which is subject to user consent), you supply the launch arguments. In your activation handler, for ActivationKind.launch, those arguments will be in the eventObject.detail.arguments property.
Second is to launch the app through a URI association. You use a custom scheme for this, which is declared in the manifest. The app will then see ActivationKind.protocol, and eventObject.detail.uri will contain the full URI, including any parameters. A URI launch can be done from another app, by entering the URI into a browser address bar, or through a shortcut that a user could configure on the Start screen. A sketch of an activation handler covering both cases is below.
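As an illustration only, a minimal WinJS activation handler covering both cases; startApp, parseArgs, and the default contextId are hypothetical stand-ins for your app's startup path:

var app = WinJS.Application;
var activation = Windows.ApplicationModel.Activation;

app.onactivated = function (args) {
  if (args.detail.kind === activation.ActivationKind.launch) {
    // Primary tile: arguments is empty, so fall back to defaults.
    // Secondary tile: arguments carries whatever the tile was created with.
    var params = args.detail.arguments
      ? parseArgs(args.detail.arguments)
      : { contextId: 'defaultContext' }; // hypothetical default
    startApp(params);
  } else if (args.detail.kind === activation.ActivationKind.protocol) {
    // URI association: e.g. myapp://start?contextId=xxxxx
    var query = args.detail.uri.query; // "?contextId=xxxxx" or ""
    startApp(parseArgs(query ? query.substring(1) : ''));
  }
};

// Parse "a=1&b=2" into { a: '1', b: '2' }.
function parseArgs(str) {
  var out = {};
  str.split('&').forEach(function (pair) {
    var kv = pair.split('=');
    if (kv[0]) { out[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || ''); }
  });
  return out;
}

// Hypothetical: boot the SPA with the resolved parameters.
function startApp(params) { /* ... */ }

app.start();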
The first step is to convert our Windows (8.1) Store project into a Universal app structure, which spins off a separate Windows Phone WinJS project (nice for when we wish to target Windows Phone later) and a shared project.
Practically everything from the Windows Store project is moved into the shared project (including default.html or index.html). What remains in the Windows Store project is a customised config.js carrying the parameters:
window.customWin8 = {
    contextId: 'xxxxxxxxxx',
    customParam: 'xxxxxxxxxx'
};
The downstream modules that check for query-string/hash parameters then fall back to this alternative object, if it exists, to pick up the data they need. One way that fallback might look is sketched below.
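A sketch of the fallback, where getParam and getQueryParam are hypothetical names for the resolver and the existing query-string reader:

// Prefer the baked-in config from config.js; otherwise read the URL.
function getParam(name) {
  if (window.customWin8 && window.customWin8[name] !== undefined) {
    return window.customWin8[name]; // packaged app: value from config.js
  }
  return getQueryParam(name); // browser: value from the query string/hash
}

// Read a parameter out of the current URL's query string or hash.
function getQueryParam(name) {
  var match = new RegExp('[?&#]' + name + '=([^&#]*)').exec(window.location.href);
  return match ? decodeURIComponent(match[1]) : undefined;
}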
Now, for every distinct app we wish to deploy, we need a separate Windows Store project, so that it gets its own GUID and won't conflict with other apps. All these projects reference the very same shared project, thanks to the Universal structure Visual Studio affords. The only downside is that Visual Studio 2013 does not seem to have a direct UI method for referencing the shared project; it has to be hand-coded into the jsproj file:
<Import Project="..\Common.Shared\Common.Shared.projitems" Label="Shared" />
With this adjustment, they can all build and package with their isolated "build configs".
I have a custom google-chromium application (based on X11/GTK+) which I am running on Ubuntu 13.x. When the system starts up, I don't want to load the Ubuntu window manager; instead I start Ubuntu in text mode, in the console, and run my custom google-chromium application from there. The application should run at 1080p. So here is the sequence:
Start Ubuntu in console mode.
Log in and start the X server (startx).
Once the X server is launched, run google-chrome (with the help of .xsession).
Everything works and I am able to start my google-chrome application, but there is one problem: it is not full screen. I have tried geometry=1920x1080 and --maximize, but nothing works, and the window shows up in the top-left corner.
As per the GTK docs, screen sizes are managed by the window manager (http://www.gtk.org/api/2.6/gtk/gtk-x11.html), which I am not running.
The question is: since I am not running any window manager, how can I tell the google-chrome application to run full screen?
Thanks.
Regards,
Farrukh Arshad.
What is called "full screen" under X11 is really a client message sent from the application to the window manager, which then resizes the window and hides the window frame; if there is no window manager, there is nothing to honour the request. Even the geometry request goes through the window manager: the toolkit can but ask.
The question is: are you modifying the Chromium code base for your application, or are you just launching the application itself? If you have access to the windowing-system code, you can get the screen size and set the window geometry yourself; see the GdkScreen API:
https://developer.gnome.org/gdk2/stable/GdkScreen.html
I would still suggest you run a small window manager; running without one degrades the functionality of any application. You can use a simple one, like twm:
http://en.wikipedia.org/wiki/Twm
or a slightly more complex, yet still very plain one, like Metacity:
https://wiki.gnome.org/Projects/Metacity