Is there a way to determine the current SIP language on the WP platform? I want to switch an input control (for example, a TextBox) to RTL when the user chooses the Hebrew language. I know it is possible, because native apps can do it, and third-party apps (Foursquare) can do it with ease as well.
These apps aren't determining anything based on the "SIP language" (and you can't actually query anything about the SIP - you can only set a scope).
What they're almost certainly doing is querying System.Globalization.CultureInfo.CurrentUICulture to determine the language that the user has set for the phone.
This assumes that the value hasn't been overridden; it will normally be configured at start-up. Take a look at the InitializeLanguage() method in App.xaml.cs to see how it is set.
For more on adding localization to your app see Globalization and localization for Windows Phone on MSDN.
Related
I have a small device implementing a Bluetooth server and a JS page that I open in Chrome, which connects to the device and manipulates some of its characteristics.
Since I'm still playing with the server code, I'm constantly changing/adding services and characteristics, and I noticed that Chrome still shows me the old ones (actually, more like an unclear mix of old and new ones). Other devices, like my phone, show the new characteristics.
How can I order Chrome to rediscover services and delete its cache? I've tried just defining the Service Changed characteristic and it didn't help; then I tried notifying the client upon connection with the values 0x0000 and 0xFFFF (assuming that would invalidate the whole range of handles), but nothing happened.
Also, what does Chrome take as the device name? (In case there are multiple "names", I mean the one displayed in the scan window.) I've tried setting the name in the "aioble.advertise()" call, and also setting it in the Device Name characteristic (0x2A00 under the Generic Access service), but neither changed the value. It is still shown as "ESP32", which I believe is some kind of default.
I've been hinted that the Bluetooth spec is implemented differently across Chrome/Android/iPhone/etc., so I was hoping to learn how Chrome implements the Service Changed feature. What should I do on the server side to make the client refetch the service data?
Thanks!
I looked at the Web Bluetooth Draft Community Report, last updated on 9 June 2022. Section 6.6.5, Responding to Service Changes, describes how Web Bluetooth is supposed to handle a service changed event.
But according to the Implementation Status, the Service Changed Event is not implemented on any of the platforms as of now.
You could open an issue on their GitHub page to get more information.
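In the meantime, a workaround that often helps during development is to drop the device permission and start discovery from scratch. The sketch below is an assumption-laden illustration rather than documented Chrome behaviour: it relies on BluetoothDevice.forget() (only in recent Chrome versions), and the service UUID is a placeholder.

```javascript
// Hedged sketch: connect, list the services the page has permission to see,
// then drop the device permission so the next requestDevice() starts fresh.
const MY_SERVICE = 0x180f; // placeholder: replace with your own service UUID

async function rediscoverServices() {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [MY_SERVICE] }]
  });

  const gatt = await device.gatt.connect();
  const services = await gatt.getPrimaryServices();
  console.log('Discovered services:', services.map(s => s.uuid));
  gatt.disconnect();

  // forget() revokes the permission; in practice this also appears to clear
  // the cached attribute data, so re-pairing triggers a clean discovery.
  if (device.forget) {
    await device.forget();
  }
}
```

Un-pairing the device from the OS Bluetooth settings (or from chrome://bluetooth-internals) is another blunt but effective way to invalidate whatever Chrome has cached.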
I have two system-wide keyboards pre-installed on my Tizen wearable device: the first is the stock Samsung keyboard, the second is a custom one. The first is the user's default, selected in Settings.
I don't want to change the system's default, but I want my application to use the Custom keyboard.
In the native API I've seen the Tizen::Ui::InputConnection object, which can be used as a property of Edit or TextArea controls, but I didn't see anything like it in the HTML5 API. Searching Tizen's forum didn't help.
I've also seen, in the Tizen SDK's IME WebHelperClient example, a number of undocumented commands used to talk to a Tizen service through a WebSocket. There is probably a command to select the active keyboard, but I didn't find it.
Any leads are appreciated.
IMO that is not possible for either web apps or native apps.
Reasons:
1. On Gear, two keyboards can't be active at the same time.
2. Suppose there were an API you could use to switch to the custom keyboard while your app is running: if you close your application not via the normal hardware exit (i.e. swiping down) but from "Recent Applications", the custom keyboard you activated for your app would remain set for other applications as well.
Also, the documentation available here doesn't cover anything you are asking about:
https://developer.tizen.org/documentation/guides/web-application/tizen-features/ime-application
I have to build (using Flex, obviously) an AIR application that must be tied to a given piece of hardware (e.g. a hard disk), so it can't be copied elsewhere.
Usually, in other languages, every time the application is opened you compare the hard disk or CPU ID against the value stored in the application itself...
Using Flex this is not possible (as far as I know)...
A user/password check obviously won't work.
I can't use a web service; the application needs to work offline too.
How could I do this?
With AIR you can integrate native apps - either indirectly (start a process and get its result) or directly (as a library).
Either of these techniques lets you achieve your goal the same way you would in the "other languages" case you described... BUT BEWARE: this makes your app platform-dependent AND can potentially lead to problems, since it is prone to permission issues etc.
Another point: no such technique is 100% secure (it can be circumvented AND can lead to dissatisfied legitimate users!), so you should really consider whether this is the way to go...
Consider the following scanning procedure in a typical document handling webapp:
The user scans a document using a scanner connected to his/her computer
The scanned image is saved locally on the user's computer as a BMP/JPG/TIF/PNG file
The user hits a file upload "Browse.." button in the web application
The user is presented with a file dialog which he/she uses to locate the scanned image
The user hits "Upload image" and the scanned image is uploaded to the server where it is stored
This process is quite complicated and I'd like to reduce the number of steps in order to make it more user friendly/foolproof. Under ideal circumstances the steps above would be replaced by a single step: initiating the scan, completing it, and uploading the resulting image would all be triggered automatically from the webapp when clicking, say, "Scan and upload". Unfortunately, the state of "web/scanner integration" seems to be quite poor, so this might be utopia.
How would you tackle this problem? More specifically, how would you go about reducing the number of steps involved in the use case described?
Well, two years have passed, so here's an update on the state of the art for those just joining us.
Both Dynamsoft and Atalasoft have multi-browser web-scanning toolkits which are compatible with any server-side stack. Both require the user to install an ActiveX (in IE) or an NPAPI plugin (Chrome, Firefox, etc.) to get access to the scanner via the TWAIN API.
Obviously, if you have the time but a limited budget, you can create your own plugin. I heartily recommend the FireBreath plugin framework, and using an existing TWAIN library rather than writing your own TWAIN code.
Once the ActiveX or plugin is installed, the rest of the work is a combination of javascript & HTML on the client, and some kind of handler on the server to accept and process the incoming image, which can be made to look just like a multipart form submit with an attached file.
I recommend doing the image upload in JavaScript using AJAX, because it is then part of the same browser 'session' as the web page, and it inherits the browser's proxy settings, session cookies and server-side authentication. I don't know about Dynamsoft's control, but the Atalasoft toolkit includes such AJAX uploading. The image(s) are handed from the plugin to the JavaScript as a base64-encoded string, so no local file is actually created.
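To make the AJAX part concrete, here is a generic sketch (not the Atalasoft or Dynamsoft API): it turns a base64-encoded image string into a Blob and POSTs it so the server sees an ordinary multipart file upload. The endpoint and field name are placeholders.

```javascript
// Generic illustration: upload a base64-encoded scan as a multipart form POST.
// '/upload-scan' and the 'scan' field name are hypothetical.
function uploadScan(base64Data, fileName, onDone) {
  var binary = atob(base64Data);              // decode base64 into a byte string
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }

  var form = new FormData();
  form.append('scan', new Blob([bytes], { type: 'image/png' }), fileName);

  // XMLHttpRequest runs inside the page's session, so cookies, proxy settings
  // and server-side authentication are inherited automatically.
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/upload-scan');
  xhr.onload = function () { onDone(xhr.status === 200); };
  xhr.send(form);
}
```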
Disclaimer: I work on Atalasoft's WingScan web-scanning toolkit.
If your target audience is running Windows and IE, and you don't mind spending a few $$, Atalasoft has some components that will do just what you're looking for.
I actually saw someone at the bank do this while setting up my account, and I was totally amazed. The bank in question was using Windows and IE; I assume you're in an equally controlled environment. I think the bank used a combination of a custom/predictable scanner driver and an ActiveX control.
A page loaded that said "Open the scanner"; the staff member popped the document in and hit Scan on the webpage, the page changed to say "Scanning", and then the scanned document was shown on the web page for the staff member to approve. I can only assume that the scanner driver sent the image to a certain location and the ActiveX control was polling for it to appear; once it appeared, the control showed the image on screen, and once the staff member had approved it, the ActiveX control uploaded it in the background. She opened the next page and carried on with the rest of the process.
God knows how they made all that tech work but it can be done.
Silverlight 4 is coming out soon. It is supposed to have the ability to interact with COM objects on the user's computer (provided they are running Windows). In theory you call WIA methods from your Silverlight web page.
We implemented Remote Deposit for a bank. It works only in IE. A WinForms DLL was created that interfaces with the LeadTools TWAIN DLL, which abstracts away all the TWAIN minutiae. This approach is slightly better than using an ActiveX control. The .NET Framework would be needed on the client. The scanned images are posted back to a hidden variable on the page and are processed on the server.
Hmm, I've always wanted to look at a scanned file before I did anything with it, but I suppose that depends on your scanner and how much quality you need.
If the goal is to "automate the scanning and uploading process" as opposed to "write a web app", I'd write an AutoIt script to control the existing scanner software and a simple ftp program.
The option most likely to remove the most steps, would probably be writing a customized scan utility that the user would download and run on their local machine.
SANE or TWAIN would handle getting the scanned image. cURL could then handle uploading the image to your web app. To make things even easier for the end user, I would use something like a Comet connection to update the web page when the file becomes available.
If that isn't an option, you might look into what options your users will likely have in their scanner's software. I believe many programs now support scanning to email or FTP.
The solution I have used for an intranet app, with multifunction scanner/copiers, was to scan to an SMB share that the web server had access to. The user just goes to the copier, scans to the share, and when they get back to their desk they open the new-scans page, which shows a list of all the new, unprocessed files.
Since your audience is in a controlled environment, you can write your own browser extension or program, based on WIA/TWAIN, that does the scanning. If you choose a browser extension (BHO/ActiveX/XPCOM, etc.), you need to get the user's permission to install it. If you choose to write a program, you may need a web deployment technology like ClickOnce or Java Web Start so it can be launched from the web.
Interfacing with TWAIN is a pain on Windows. Complexity aside, you have to display GUIs written by the various scanner driver developers. It may be the only way to support old scanners or features not exposed via other interfaces, like full-speed multipage scans from a document feeder.
Microsoft's WIA makes interfacing with the scanner much easier via a scripting object model; however, scanner-specific features are not available, and some old scanners do not support the interface.
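For a feel of the WIA scripting object model, here is a rough JScript sketch; it only runs in old IE with ActiveX enabled or under Windows Script Host, the optional ShowAcquireImage arguments are left at their defaults, and the save path is a placeholder.

```javascript
// Acquire one page through the WIA automation layer and save it locally.
var WIA_DEVICE_TYPE_SCANNER = 1; // WiaDeviceType.ScannerDeviceType

var dialog = new ActiveXObject("WIA.CommonDialog");
// Shows the driver's acquire dialog and returns a WIA ImageFile object
// (or null if the user cancels).
var image = dialog.ShowAcquireImage(WIA_DEVICE_TYPE_SCANNER);

if (image !== null) {
  image.SaveFile("C:\\scans\\page1.bmp"); // placeholder path
}
```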
After scanning, you can call a web service to notify the server, and the web page can refresh periodically to check for new images.
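The page-refresh side can be as simple as polling; in this sketch the /scans/pending endpoint and the showScans() page-update function are hypothetical.

```javascript
// Poll the server every few seconds and update the page when new scans appear.
function pollForNewScans() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/scans/pending');          // hypothetical endpoint
  xhr.onload = function () {
    var scans = JSON.parse(xhr.responseText); // e.g. a list of image URLs
    if (scans.length > 0) {
      showScans(scans);                       // hypothetical page-update function
    }
  };
  xhr.send();
}

setInterval(pollForNewScans, 5000); // check every 5 seconds
```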
We have done something similar: we used a command-line TWAIN program, QuickScan (http://www.burrotech.com/quickscan.php), which costs $49.
1) We developed a small .NET application to run the QuickScan program as a shell command.
2) The command was assigned to the Scan button.
3) Once the user presses the Scan button, a prompt appears asking for the file name; the user enters the transaction ID as the file name.
4) Another .NET application (or maybe the same one mentioned before) reads this file and uploads it into the database, given that the filename is the transaction ID.
Worked like a hot knife through butter!
You can try displaying the transaction ID in IE and having the user select the ID and then press Scan. Your application reads the SELECTED text and saves the file using the SELECTED text as the file name. We haven't tried it, but it should work.
It is only utopia if you think that web applications are limited to web browsers; in fact, web applications can include a lot of different technologies besides HTML and JavaScript.
The cool way of solving that problem -- in fact, I have already used it for some USB-serial devices -- is to implement your application using SOAP+XMPP. You can do that in Perl by using XML::CompileX::Transport::SOAPXMPP, Catalyst::Engine::XMPP2, Catalyst::Controller::SOAP and Catalyst::Model::SOAP.
The interesting thing about using XMPP is that it simplifies the management of addressing, since you use the JID (Jabber ID) to look up the software agent rather than some host+port addressing scheme. The second interesting part is that XMPP makes it easier for the server to push information to the client.
But if you don't want to deal with XMPP, you can still do the same thing with a lightweight embedded HTTP server -- HTTP::Server::Simple, in Perl -- and somehow register the current scanner address with the server so it can call back.
And a last option, which is not so cute, is to have the software agent poll the server to see whether there is a "scan document and upload" order for that specific machine, and perform the operation when one is present.
In summary, having a local software agent to interact with the local hardware doesn't make your webapp less "web", as long as you use web standards -- like XML, SOAP and others -- to perform that communication.
You can put a Java applet in your website. This can access the scanner and send the data via REST to your web server.
This is for software used in a call centre to guide agents through a set script they must follow while on telephone calls, with the script branching depending on the answers given to questions. My system uses an MS Access / VBA front end (it isn't web based, due to speed and phone integration), and 'call scripting' is coded in VBA when needed. But what if I want to move to a more complete solution?
Is hosting HTML in the MS WebBrowser control the obvious platform to build call scripting on?
A manager view will also be needed, allowing managers to create scripts, divide them into parts, specify routing rules that determine the path through the script, link input boxes (i.e. question answers) back to database fields, and specify validation rules as well.
Thinking about the complexity of building a manager view that translates the intended script into HTML/JavaScript, is creating my own simple document description language, with tags for just the features I need, plus a 'rendering engine' in VBA, a solution you might consider for this?
I've thought about creating scripts out of standard Access controls, using a relational table structure to hold the information about which controls relate to which parts of the scripts, plus validation, routing options, etc., but I think that, due to Access's lack of runtime control creation, this will be more painful than a rendering engine that takes a script written in my own document description language and displays it.
What suggestions have you for the implementation of this?
The actual user interface requirements of the user-facing part of your system seem to be pretty minimal (ask question, get answer, branch to "next" question). I don't think there's any "obvious" platform to use. As always, a browser based system makes it easier for geographically scattered users to use a centralized system but will probably cost more in terms of development.
The manager-facing part of the application is more interesting and for that I'd probably suggest a desktop application rather than a browser based one. I can see this relying on a lot of drag-and-drop and line-drawing type functionality and that kind of stuff is still easier to do on the desktop, at least in my opinion.
Assuming that you can clearly define the kinds of question routing and decision points a script has, and that these decisions are relatively simple, I would probably write my own specification for how a survey is represented. The manager-facing application would create, edit, and save a specification, and the user-facing application would read and step through one.
Given my personal skill set, I'd probably write both applications in Delphi, develop an XML format to represent the survey specification, and consider either XML or relational storage for the back end depending on what you actually wind up doing with the data.
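To make the idea of a survey/script specification concrete, here is a purely illustrative sketch of the shape such a specification could take, written as a JavaScript object literal for readability (the answer above suggests XML; all field names here are made up).

```javascript
// Illustrative only: a minimal branching-script specification.
var callScript = {
  start: "q1",
  questions: {
    q1: {
      text: "Is the caller an existing customer?",
      saveTo: "ExistingCustomer",   // database field the answer is linked to
      answers: [
        { label: "Yes", next: "q2" },
        { label: "No",  next: "q3" }
      ]
    },
    q2: { text: "Please confirm the account number.", saveTo: "AccountNumber", answers: [] },
    q3: { text: "Take the caller's contact details.", saveTo: "ContactDetails", answers: [] }
  }
};

// The agent-facing app simply walks the graph: display questions[current],
// record the chosen answer into saveTo, then follow that answer's "next" id.
```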
I'm inclined to agree that a web interface is optimal for something like this, however WPF is a great alternative as well with many of the advantages of a web interface along with the power of a desktop application. Both web and WPF would give you considerable amount of control over how the application looks and feels, leveraged with all the power of the .NET framework. One drawback of a web app is that you have less ability to interact with the phone system directly, but I'm sure that's a problem that can be mitigated fairly easily with some AJAX. A big plus for the web platform option is that you'll have access to tons of client-side interactivity libraries like jQuery, which will allow you to polish the application with greater ease; with WPF you would likely find yourself paying for a lot of the fancy UI controls.
There are quite a few survey and question (test-creation) systems built in Access. I don't see any real issue in using Access as opposed to any other system.
The advantage of HTML or text-based systems is that they tend to support more dynamic screens.
On the other hand, for questions and display of text, a great trick in Access is to place a sub-form control on a form and then, at runtime, simply "set" which form is to be displayed in that sub-form (the SourceObject property). In fact, in Access 2010 the nav control does exactly that: it displays forms in a sub-form.
Also, with Access 2010 we can create web-based applications. In this video you can see that at the halfway mark I switch to running the application in a standard browser.
http://www.youtube.com/watch?v=AU4mH0jPntI
However, the above means little here, as I'm not sure what you mean by some type of rendering engine. Each question + response is simply going to be some text on a screen, and thus you can simply display/change that text by changing the underlying recordset.
And if you want nicely formatted text with different fonts etc., well, Access 2007 now has support for rich text (markup text). So I don't think you really need dynamic screen creation. Between changing the record source to display whatever text you want, and being able to display different forms (templates) on the fly by changing the sub-form's SourceObject, you can easily change part of your screen to display different text boxes with very little code.
On the other hand, if you have all of the .NET tools and want to create a browser-based application, then you are free to do so. I suppose you could also wait for Access 2010 to create a browser application.
If you're willing to keep this simple, then Access is a great choice. If you need a browser-based application, then I don't think Access is the choice here.