How to pass/parse server data to objects in WatchKit table rows? - json

My tutorial app is a WhatsApp/Snapchat-style app. Naturally, the avatar image, country flag, user name, gender symbol and conversation data all come from the server and the host app.
These kinds of apps don't use Parse-like APIs or other third-party dependencies, because they use REST/JSON against their own servers.
How do I get this same data and these UI elements onto the watch table row? Do we have to re-write the same HTTP GET methods in the watch extension and re-copy the UI elements into the watch app's assets folder? Can we not just call the same methods that already exist in the iOS host app? I'm not sure how the Watch Connectivity framework would be used here.
Could you please give an example of GET and POST methods for assigning an avatar or username to the watch table row view object? For example, against a Node.js server.

Since your iOS host app has already downloaded and deserialized the data, it doesn't make any sense for the watch to duplicate that code or effort and GET the same data.
As for providing an example, you should show what you tried in code, and explain the specific problem you're having.
Documentation
You should use the Watch Connectivity framework to share data between your iOS and watchOS apps.
You'll find a good introduction in the watchOS 2 Transition Guide. See Communicating with Your Companion iOS App for details.
Apple also provides Lister sample code which demonstrates how to use WCSession to transfer both application context and files between iOS and watchOS.
Since the host app is written in Obj-C, does WatchConnectivity / WCSessionDelegate need to be imported into the header of every file that contains data that needs to be sent to the watch extension?
WCSession is a singleton that you configure at launch time, early in the life of both your iOS app and watch extension. See the transition guide's Activating the Session Object for more information.
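For example, on the iOS side the launch-time setup might look roughly like the following minimal Swift sketch, using the same Swift 2-era WatchConnectivity API as the samples further down (an Obj-C host app would do the equivalent in its app delegate, and only the class that owns the session needs to import the framework):
import UIKit
import WatchConnectivity

class AppDelegate: UIResponder, UIApplicationDelegate, WCSessionDelegate {

    var window: UIWindow?

    func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
        // Configure and activate the shared session once, as early as possible.
        if WCSession.isSupported() {
            let session = WCSession.defaultSession()
            session.delegate = self
            session.activateSession()
        }
        return true
    }
}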
If you don't understand how or where your apps should handle watch connectivity, there are plenty of tutorials and sample projects which you can easily find via Google.
So, based on what you said, I just need to use the Watch Connectivity framework's sendMessageToWatch and didReceiveMessage methods.
The exact methods you use depend on what you want to transfer -- application context, user info, files, or messages -- and whether it takes place in the foreground or background. See the transition guide's Choosing the Right Communication Option for more information.
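As a rough illustration: if you chose application context for the lightweight profile fields and a file transfer for the avatar image, the sending side in the iOS host app might look something like the sketch below. This is only a hedged example; the ChatUser model, the pushToWatch function and the dictionary keys are made up for illustration and are not part of any existing API.
import Foundation
import WatchConnectivity

// Hypothetical model, assumed to be filled in by the host app's existing REST/JSON code.
struct ChatUser {
    let username: String
    let countryCode: String
    let avatarFileURL: NSURL // local path of the avatar the host app already downloaded
}

// Assumes the session was activated at launch, as shown earlier.
func pushToWatch(user: ChatUser) {
    guard WCSession.isSupported() else { return }
    let session = WCSession.defaultSession()

    // Small, "latest value wins" fields travel well as application context.
    do {
        try session.updateApplicationContext([
            "username": user.username,
            "country": user.countryCode
        ])
    } catch {
        print("Failed to update application context: \(error)")
    }

    // Binary data such as the avatar is better queued as a background file transfer.
    session.transferFile(user.avatarFileURL, metadata: ["username": user.username])
}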

If you check some of the chat app projects that are already on GitHub, you will see exactly how to use the Watch Connectivity framework, in both Obj-C and Swift.
Here's one that specifically shows you how to pass messages back and forth.
https://github.com/carbamide/MessagingTest
This is not my code, but as you can see, the code on both sides is almost the same.
ViewController.swift
override func viewDidLoad() {
    super.viewDidLoad()

    // Configure and activate the shared session when the view loads.
    if WCSession.isSupported() {
        let session = WCSession.defaultSession()
        session.delegate = self
        session.activateSession()
    }
}

// Called when the watch app pushes a new application context.
func session(session: WCSession, didReceiveApplicationContext applicationContext: [String : AnyObject]) {
    print(applicationContext)

    let okButton = UIAlertAction(title: "OK", style: .Default, handler: nil)
    let alert = UIAlertController(title: "Application Context Received", message: applicationContext.description, preferredStyle: .Alert)
    alert.addAction(okButton)
    self.presentViewController(alert, animated: true, completion: nil)
}
watchOS InterfaceController.swift
// Called when the iOS app pushes a new application context.
func session(session: WCSession, didReceiveApplicationContext applicationContext: [String : AnyObject]) {
    print(applicationContext)

    let okButton = WKAlertAction(title: "OK", style: WKAlertActionStyle.Default, handler: { () -> Void in })
    self.presentAlertControllerWithTitle("Application Context Received", message: applicationContext.description, preferredStyle: .Alert, actions: [okButton])
}
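To get those received values onto an actual table row (which is what the original question asks about), the watch extension can hold the data and reload a WKInterfaceTable. Below is a rough sketch using the same Swift 2-era APIs; the "ChatRow" row type, the ChatRowController class and its outlets are hypothetical and would have to match your watch storyboard.
import WatchKit
import WatchConnectivity

// Row controller for the hypothetical "ChatRow" row type defined in the watch storyboard.
class ChatRowController: NSObject {
    @IBOutlet var usernameLabel: WKInterfaceLabel!
    @IBOutlet var avatarImage: WKInterfaceImage!
}

class ChatInterfaceController: WKInterfaceController, WCSessionDelegate {
    @IBOutlet var table: WKInterfaceTable!

    // Session activation happens in awakeWithContext, exactly as in the sample above.

    func session(session: WCSession, didReceiveApplicationContext applicationContext: [String : AnyObject]) {
        let username = applicationContext["username"] as? String ?? "Unknown"
        // Delegate callbacks arrive on a background queue; touch the UI on the main queue.
        dispatch_async(dispatch_get_main_queue()) {
            self.table.setNumberOfRows(1, withRowType: "ChatRow") // one row, for brevity
            if let row = self.table.rowControllerAtIndex(0) as? ChatRowController {
                row.usernameLabel.setText(username)
            }
        }
    }

    func session(session: WCSession, didReceiveFile file: WCSessionFile) {
        // The avatar arrives as a file transfer; the system deletes file.fileURL after this
        // method returns, so read (or move) the data immediately.
        guard let data = NSData(contentsOfURL: file.fileURL) else { return }
        dispatch_async(dispatch_get_main_queue()) {
            if let row = self.table.rowControllerAtIndex(0) as? ChatRowController {
                row.avatarImage.setImageData(data)
            }
        }
    }
}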

Related

Using Chrome DevTools Protocol Input.dispatchKeyEvent or Input.dispatchMouseEvent to send an event

I'm writing a DSL that will interact with a page via Google Chrome's Remote Debugging API.
The INPUT domain (link here:
https://chromedevtools.github.io/devtools-protocol/1-2/Input/) lists two functions that can be used for sending events: Input.dispatchKeyEvent and Input.dispatchMouseEvent.
I can't seem to figure out how to specify the target element, as there is no link between the two functions and DOM.NodeId, nor an intermediate API that accepts a DOM.NodeId and returns an X,Y coordinate.
I know that it's possible to use Selenium, but I'm interested in doing this directly over WebSockets.
Any help is appreciated.
Brief Intro
I'm currently working on a NodeJS interaction library to work with Chrome Headless via the Remote Debugging Protocol. The idea is to integrate it into my colleague's testing framework to eventually replace the usage of PhantomJS, which is no longer being supported.
Evaluating JavaScript
I'm just experimenting with things at the moment, but I have a way of evaluating JavaScript on the page, for example to click an element via a selector reference. In theory it should work for anything, assuming my implementation isn't flawed.
let evaluateOnPage = function (fn) {
    // Stringify every argument after the function so they survive serialization.
    let args = [...arguments].slice(1).map(a => {
        return JSON.stringify(a);
    });

    // Wrap the function in an IIFE that applies the serialized arguments.
    let evaluationStr = `
        (function() {
            let fn = ${String(fn)};
            return fn.apply(null, [${args}]);
        })()`;

    return Runtime.evaluate({expression: evaluationStr});
};
The code above will accept a function and any number of arguments. It will turn the arguments into strings, so they are serializable. It then evaluates an IIFE on the page, which calls the function passed in with the arguments.
Example Usage
let selector = '.mySelector';

let result = evaluateOnPage(selector => {
    return document.querySelector(selector).click();
}, selector);
The result of Runtime.evaluate is a promise; when it is fulfilled, you can check the result object for a type to determine success or failure. For example, the subtype may be node or error.
I hope this may be of some use to you.
This protocol is probably not the best fit if you want to click on specific elements rather than clicking on spots on the screen...
It's important to keep in mind that this area of the DevTools protocol is intended to emulate raw input. If you want to figure out the position of elements using the protocol, or by running some JavaScript in the page, you can do that; however, it might be better to use something like target.dispatchEvent() with a MouseEvent and inject the JavaScript into the page instead.

How to find out the availability status of a Web API from a Windows Store application

I have a Line-of-Business (LoB) Windows 8.1 Store application I developed for a client. The client side-loads it on several Windows 10 tablets. They use it in an environment where WiFi is spotty at best, and they would like some sort of notification inside the app, regardless of what page they are on, that lets them know they've lost connectivity to the network.
I have created a method on my Web API that does not hit the repository (database). Instead, it quickly returns some static information about the Web API, such as the version, the date and time of the invocation, and some trademark text I'm required to return. I thought of calling this method at precise intervals and, when there's no response, assuming that connectivity to the Web API has been lost. In my main page, the first one displayed when the application is started, I have the following in the constructor of my view model:
_webApiStatusTimer = new DispatcherTimer();
_webApiStatusTimer.Tick += OnCheckWebApiStatusEvent;
_webApiStatusTimer.Interval = new TimeSpan(0, 0, 30);
_webApiStatusTimer.Start();
Then, the event handler is implemented like this:
private async void OnCheckWebApiStatusEvent(object sender, object e)
{
    // stop the timer
    _webApiStatusTimer.Stop();

    // refresh the search
    var webApiInfo = await _webApiClient.GetWebApiInfo();

    // add all returned records in the list
    if (webApiInfo == null)
    {
        var messageDialog = new MessageDialog(@"The application has lost connection with the back-end Web API!");
        await messageDialog.ShowAsync();

        // restart the timer
        _webApiStatusTimer.Start();
    }
}
When the Web API connection is lost, I get a nice popup message informing me that the Web API is no longer available. The problem I have is that after a while, especially (but not necessarily) when I navigate away from the first page, I get an UnauthorizedAccessException in my application.
I use a DispatcherTimer since my understanding is that it is compatible with UI threads, but obviously I'm still doing something wrong. Anyone care to set me on the right path?
Also, if you did something similar and found a much better approach, I'd love to hear about your solution.
Thanks in advance,
Eddie
First, since this is a Windows Store app, you could use a background task to poll for the status of the Web API instead of putting this responsibility on your view model; it's not the view model's concern.
Second, if you are connecting from your Windows Store app to your API, then on the first successful authentication/authorization, how and where do you store the token (assuming you are using token authentication)? If you are (and ideally you should be), is there a timer you start that is set to the token's expiration time? Is your local storage somehow getting flushed and losing the authorization data?
Need more information.

An internal error occurred in the Places API library

I'm working with the GoogleMaps and GooglePlaces APIs, but I always get the same error:
"The operation couldn't be completed. An internal error occurred in the Places API library. If you believe this error represents a bug, please file a report using the instructions on our community and support page (https://developers.google.com/places/support)"
I've tried running pod try GoogleMaps, and when I launch the project no map is loaded, only a tableView with different options.
This is my code; I only want to obtain the user's position:
- (IBAction)getCurrentPlace:(UIButton *)sender {
    [placesClient currentPlaceWithCallback:^(GMSPlaceLikelihoodList *placeLikelihoodList, NSError *error) {
        if (error != nil) {
            NSLog(@"Pick Place error %@", [error localizedDescription]);
            return;
        }

        self.nameLabel.text = @"No current place";
        self.addressLabel.text = @"";

        if (placeLikelihoodList != nil) {
            GMSPlace *place = [[[placeLikelihoodList likelihoods] firstObject] place];
            if (place != nil) {
                self.nameLabel.text = place.name;
                self.addressLabel.text = [[place.formattedAddress componentsSeparatedByString:@", "]
                                          componentsJoinedByString:@"\n"];
            }
        }
    }];
}
I have provided my API key in didFinishLaunchingWithOptions:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Override point for customization after application launch.
    [GMSServices provideAPIKey:@"MY_APIKEY"];
    return YES;
}
And this is my Podfile
platform :ios, '7.0'
source 'https://github.com/CocoaPods/Specs.git'

target 'googlePlace' do
  pod 'GoogleMaps'
  pod 'GooglePlaces'
end
Finally, my API key is configured for iOS applications in the Google Console.
What's the problem?
This looks like your API key is either invalid or out of quota.
Can you check that you are passing the correct API key to GMSPlacesClient.provideAPIKey(_:)?
If you are sure that you are passing the correct API key, check the Google API Console (https://console.developers.google.com/) to ensure you have not run over your daily quota limit.
This issue can be tracked here: https://issuetracker.google.com/issues/35830792
Answering my own question:
https://developers.google.com/places/migrate-to-v2?hl=es-419
This fixed the problem
Migrating to Google Places API for iOS, version 2
With the version 2 release of the Google Maps SDK for iOS, the Google Places API for iOS has been split from the Google Maps SDK for iOS and is now distributed as a separate CocoaPod.
Take the following steps to update your existing apps:
Update your Podfile to reference the GooglePlaces CocoaPod in addition to the GoogleMaps CocoaPod. If you are not using the Google Maps SDK for iOS, you can remove GoogleMaps.
If you are using the place picker, update your Podfile to reference the GooglePlacePicker CocoaPod in addition to GooglePlaces.
Rename GoogleMaps to GooglePlaces in all imports where you are using the Places API.
Specify your API key using GMSPlacesClient.provideAPIKey(_:) instead of GMSServices.provideAPIKey(_:).
Get the required open source license text using GMSPlacesClient.openSourceLicenseInfo() as well as GMSServices.openSourceLicenseInfo() if you are using the Google Maps SDK for iOS or the Place Picker.
After doing 8 to 12 hours of R&D, I found the solution below. I am sure it will work.
Just add the key in the AppDelegate.swift file, not in any other class. It will solve your problem.
func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
    GMSPlacesClient.provideAPIKey("KEY")
    GMSServices.provideAPIKey("KEY")
    return true
}
Note: it is compulsory to initialize GMSPlacesClient in the AppDelegate, because it takes some time to initialize.
A 2018 answer: at my new job, we were having a hard time with this. My own Places key works, but not my new company's Places key. After a few hours, I remembered that back when I was setting up my own keys, I HAD TO REFRESH or REGENERATE NEW KEYS for the Places API before I got it to work. Back to my current situation: I told this to my colleagues and boom, we just had to refresh the key, and it worked! :)

VimeoUpload not re-authenticating After Deletion of App Access on Vimeo.com

I was able to connect and upload videos using the library, but when I deleted the app connection on Vimeo.com (as a test), the app didn't authorize again.
The upload looks like it's working, but nothing is uploaded, as the app is no longer connected.
I deleted the app on the phone and restarted, but it still won't re-authorize the app.
This comes up in the output:
Vimeo upload state : Executing
Vimeo upload state : Finished
Invalid http status code for download task.
And this is in OldVimeoUpload.swift (I didn't include the actual access token!):
import Foundation

class OldVimeoUpload: VimeoUpload
{
    static var VIMEO_ACCESS_TOKEN: String! // = "there's a string of numbers here"

    static let sharedInstance = OldVimeoUpload(backgroundSessionIdentifier: "") { () -> String? in
        return VIMEO_ACCESS_TOKEN // See README for details on how to obtain an OAuth token
    }

    // MARK: - Initialization

    override init(backgroundSessionIdentifier: String, authTokenBlock: AuthTokenBlock)
    {
        super.init(backgroundSessionIdentifier: backgroundSessionIdentifier, authTokenBlock: authTokenBlock)
    }
}
It looks like the access token is commented out. I deleted the two forward slashes to see if that would fix it, but it didn't.
I spoke too soon.
It sounds like you went to developer.vimeo.com and created an auth token, used it to upload videos, and then went back to developer.vimeo.com and deleted the auth token.
The app / VimeoUpload will not automatically re-authenticate in this situation. You've killed the token, and the app cannot request a new one for you. You'll need to create a new auth token and plug it into the app.
If this is not accurate and you're describing a different issue, let us know.
If you inspect the error that's thrown from the failing request I'm guessing you'll see it's a 401 unauthorized related to using an invalid token.
Edit:
Disconnecting your app (as described in your comment below) has the same effect as deleting your auth token from developer.vimeo.com.
Also, VimeoUpload accepts a hardcoded auth token (as you see from the README and your code sample). It will not automatically re-authenticate, probably ever.
If you'd like to handle authentication in your app check out VimeoNetworking or VIMNetworking. Either of those libraries can be used to create a variety of authentication flows / scenarios. Still, if a logged in user disconnects or deletes their token, you will need them to deliberately re-authenticate (i.e. you will need to build that flow yourself). In that case, the user has explicitly stated that they don't want the app to be able to access information on their behalf. It would go against our security contract with them to automatically re-authenticate somehow.
Does that make sense?

How to run code in an iOS app from a UI test in Xcode 7?

Is there a way to run code in the app from a UI test in Xcode 7? This is possible with application tests (since the tests run in the app), but there doesn't appear to be a simple way with UI tests.
Has anyone figured out a workaround?
The most straightforward way to run code in the app you are testing from UI tests is to supply launch arguments via XCUIApplication.
ui test code
import XCTest

class UITestUITests: XCTestCase {
    override func setUp() {
        super.setUp()

        let app = XCUIApplication()
        app.launchArguments += ["-anargument", "false", "-anotherargument", "true"]
        app.launch()
    }
}
app code
func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
    // Arguments of the form "-key value" are registered in the user defaults argument domain.
    let defaults = NSUserDefaults.standardUserDefaults()
    let anargument = defaults.boolForKey("anargument")
    let anotherargument = defaults.boolForKey("anotherargument")

    print("All arguments: \(NSProcessInfo.processInfo().arguments)\n\n")
    print("anargument: \(anargument)")
    print("anotherargument: \(anotherargument)")

    return true
}
app output when launched from ui test:
All arguments: ["/...../AnApp.app/UITest", "-anargument", "false", "-anotherargument", "true"]
anargument: false
anotherargument: true
UI Testing runs in a separate process from your app. There is currently, as of Xcode 7.1.1, no way to interact directly with the production app's code from the framework.
Every interaction must route through accessibility. This means that you could wire up a button that executed code in your app, then have the tests call that button. This is obviously not scalable and I would recommend against it.
Maybe there is another way to achieve your goals? What exactly are you trying to accomplish?
I'm thinking of passing an environment variable to the app when testing which launches an embedded HTTP server. Then I can communicate with the app through the server. Crazy, right? And yet, I can't believe nobody has done this yet.
My biggest concern with this approach is that the embedded server will be in the production app. I'm not sure if that is a problem, or if there's a simple way to only include it when running UI tests.
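One way to keep such a server out of normal runs is to gate it behind both a compile-time flag and a launch argument that only the UI test target passes. The sketch below assumes a DEBUG compilation flag is defined for debug builds only; TestHTTPServer is a stand-in for whichever embedded-server library you'd pick, not a real API.
import Foundation

#if DEBUG
// Placeholder for the embedded HTTP server; the type and its start(port:) method are hypothetical.
class TestHTTPServer {
    func start(port port: UInt16) {
        // bind a socket and route requests to test hooks here
    }
}

private var testServer: TestHTTPServer?
#endif

func startTestServerIfRequested() {
    #if DEBUG
    // The UI test target would pass "-StartTestServer" via XCUIApplication().launchArguments.
    if NSProcessInfo.processInfo().arguments.contains("-StartTestServer") {
        let server = TestHTTPServer()
        server.start(port: 8080)
        testServer = server // keep a strong reference so the server stays alive
    }
    #endif
}
Calling startTestServerIfRequested() from application:didFinishLaunchingWithOptions: means release builds contain neither the server code nor the check, which addresses the concern about shipping it in the production app.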