Percy CLI - logging in? executing in order? - visual-testing

Two-fold question:
How do I get Percy to execute snapshots in order?
Can you help me troubleshoot my simple login script?
I'm trying to set up the basic beginnings of Percy CLI, using a .js file as detailed here: https://docs.percy.io/docs/cli-snapshot (I have spent a lot of time with this document).
My website is mostly behind a login, so being able to log in within Percy and keep snapshotting is of key importance. I have created the following snapshot script: it sends Percy to the login screen, enters the credentials and logs in, then takes a snapshot of /home (which is not accessible without logging in).
module.exports = [
  {
    name: 'Login Page - Execute Login',
    url: 'http://localhost:3000/login',
    waitForSelector: '.login-form > button',
    waitForTimeout: 3000,
    execute() {
      document.querySelector('#email').value = 'test@test.com';
      document.querySelector('#confirmPassword').value = 'password';
      document.querySelector('.login-form > button').click();
    }
  },
  {
    name: 'Home',
    url: 'http://localhost:3000/home',
    waitForTimeout: 2000
  }
];
When I enter this script line by line into the Chrome console, everything works as expected.
When I run it in Percy CLI in the terminal, it executes without error:
9:50:~$ npx percy snapshot snapshots.js
[percy] Percy has started!
[percy] Snapshot taken: Home
[percy] Snapshot taken: Login Page - Execute Login
[percy] Finalized build #21: <URL removed for privacy>
But the screenshots I see show that the email and password have not been set as the values of the inputs. I do not understand how this would work in the Chrome console but not in Percy's headless Chromium browser.
Additionally, you'll notice that Percy takes the 'Home' snapshot first, even though it is listed second in the exported array.
Is there any trick to getting snapshots taken in a certain order? A different format besides exporting a JS module?
Or, can I somehow create a login cookie for my website before running Percy so I don't have to worry about order?
Is there an SDK I should be using to make these slightly more complex interactions (beyond just going to a URL and snapshotting) possible?
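For reference, here is the rough shape of what I imagine an SDK-based flow would look like, assuming the @percy/puppeteer package and running the script under npx percy exec (I have not verified this, so it may be the wrong direction entirely):

// login-and-snapshot.ts -- sketch only, not verified.
// Assumes: npm install puppeteer @percy/puppeteer, then run with:
//   npx percy exec -- ts-node login-and-snapshot.ts
import puppeteer from 'puppeteer';
import percySnapshot from '@percy/puppeteer';

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Log in once in a real browser session, so the order of operations is explicit.
  await page.goto('http://localhost:3000/login');
  await page.type('#email', 'test@test.com');
  await page.type('#confirmPassword', 'password');
  await Promise.all([
    page.waitForNavigation(),
    page.click('.login-form > button'),
  ]);
  await percySnapshot(page, 'Login Page - Execute Login');

  // The session cookie set during login is still attached to this page.
  await page.goto('http://localhost:3000/home');
  await percySnapshot(page, 'Home');

  await browser.close();
})();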
Thanks very very much.

Related

AASA - Apple App Site Association - Not working

I have been having a long and frustrating experience trying to get AASA to work for webcredentials. My goal here is to allow usernames and passwords to be stored in the iOS keychain.
I did have this working on a root domain the other week, but that is not sufficient for my scenario, as I will explain. It didn't work for me straight away, I have to say, but it eventually started working after a clean build, so at the time I thought that was the issue; now I am not so sure.
I am using Expo with EAS Build. We have a multi-tenant application: from a single codebase we deploy to multiple apps in the store. All are on the same team ID, but they are separate applications and use separate credentials; nothing is shared.
I am confident my app's textContentType values of username and password on my TextFields are correct, as this has not changed from when I managed to get it working originally and I have checked it countless times.
Expectation
For the "Save Password" prompt to be displayed after login. What I have noticed, however, is that when storing a password manually using "Add Password" via iCloud Keychain from the keyboard accessory, it does accurately show the correct "TENANT_SUBDOMAIN.example.com". I find this confusing.
Goal Scenario
I am hosting a site on Netlify. I have it set up to support wildcard subdomains with a Let's Encrypt provisioned wildcard SSL certificate. I then have edge functions which change the content of my index.html and apple-app-site-association file dynamically based on the requested subdomain.
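Roughly, the edge function behind the AASA file looks like this (a simplified sketch rather than my real code; the per-tenant app ID lookup shown here is hypothetical):

// netlify/edge-functions/aasa.ts -- simplified sketch; the real function also rewrites index.html.
// The function is mapped to /.well-known/apple-app-site-association in the Netlify config.
export default async (request: Request): Promise<Response> => {
  const hostname = new URL(request.url).hostname;  // e.g. tenant1.example.app
  const subdomain = hostname.split('.')[0];        // e.g. tenant1

  // Hypothetical lookup: each tenant subdomain maps to its own app ID.
  const appId = `MYTEAMID.com.cf.${subdomain}.b2c.ios`;

  const aasa = {
    webcredentials: { apps: [appId] },
  };

  return new Response(JSON.stringify(aasa), {
    headers: { 'content-type': 'application/json' },
  });
};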
I have added the Associated Domains capability to my provisioning profile.
I am using the latest Expo 47 and EAS Build. I have added the appropriate associated domains configuration, and I can see it when introspecting my entitlements under com.apple.developer.associated-domains; it is correct.
I am using TestFlight for testing. I am doing a --clean-build on EAS every time, and I also increase the runtime version. I have also tried manually refreshing credentials outside of the build process, which does this automatically. This must be using the correct provisioning profile; otherwise I would get a build failure, as the requested entitlements wouldn't match.
The AASA file is currently hosted just in the .well-known directory. I have tried using the root and also tried using both. There are no redirects taking place.
I am aware the AASA file is pulled on application installation and update. I repeatedly remove the apps and then reboot my phone in an attempt to reset any device caches.
The content-type of the file is application/json and I have confirmed this using developer tools in the browser.
There is no robots.txt or anything blocking the request from an infrastructure perspective. There are no additional firewalls or geo restricted access as I am just using plain Netlify to host this, nothing fancy.
I am confident the Team ID and bundle IDs are correct in the AASA file.
I remove the content-length header in the Edge function so it is correctly calculated by the network instead and I have confirmed this using curl.
When I check the file using https://app-site-association.cdn-apple.com/a/v1/site.example.com, Apple has the correct file cached on its CDN, so I would expect it to work.
I added an applinks section so I could use the Apple App Search API validation tool and the Branch.io AASA verification tool to verify correctness. Branch.io says the file is fine, and Apple says it's fine too, but because the app has not been deployed to the store yet I see "Error no apps with domain entitlements". From what I can tell this is normal in development and makes sense, as it uses the currently released version of the app to verify the deep link configuration. So to me this means Apple can parse the file correctly.
When I stream my device console logs, on install I can see the AASA requests for the correct domains. I see no errors from swcd; I just see Beginning data task AASA-XXXX with the correct domains.
When I run Charles Proxy on my phone with a verified SSL installation (also reinstalled a few times now) I do not see quite what I would expect, but the device logs seem to imply it is doing the correct thing. When I view the app-site-association... URL requests in Charles there is one per application install, which is correct. The request is marked as Unknown, and when I look at the request the host is shown but, as you would expect with SSL, I see no path. The info says METHOD: CONNECT with "Error - Input Error: EOF". This is the only error I see; I am not sure if it is a red herring and something to do with Charles. Given the error, as you would expect, there is no body in the request or response. It is worth noting that in general testing I have no VPN enabled and I do not have Private Relay enabled in my iOS settings.
When I perform a sysdiagnose I see the following in the swcutil_show.txt device log, at the timestamp in my console log. This looks correct in comparison to other apps' webcredentials and applinks services I see there, and I see no errors:
Service: webcredentials
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x141816200> { v = 0, t = 0x8, u = 0x1e7c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x7c1e000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-09 14:14:32 +0000
Next Check: 2022-12-14 14:03:00 +0000
Service: applinks
App ID: MYTEAMID.com.cf.example.b2c.ios
App Version: 1.0
App PI: <LSPersistentIdentifier 0x13fd38d00> { v = 0, t = 0x8, u = 0x219c, db = 0094F7C4-3078-41A2-A33E-79D5A62C80A6, {length = 8, bytes = 0x9c21000000000000} }
Domain: CORRECT_SUBDOMAIN.example.app
Patterns: {"/":"*"}
User Approval: unspecified
Site/Fmwk Approval: approved
Flags:
Last Checked: 2022-12-13 13:13:23 +0000
Next Check: 2022-12-18 13:01:51 +0000
At end of file:
MYTEAMID.com.cf.example.b2c.ios: 8 bytes
(This seems correct for all apps)
Other Scenario
I have tried setting this up using an apex on another domain which hasn't been seen before by Apple. I have tried using a subdomain with a root domain serving the same content and I have tried the subdomain and root domain on their own. I have also tried not using the Edge functions and having static files but to no avail.
When I do this I ensure I wait for the Apple CDN to catch up and remove/add entries prior to deleting the apps, rebooting my device, and reinstalling to test.
AASA File
The AASA content comes back with the correct payload, Content-Type: application/json, and Content-Length headers, both from Apple's CDN and the origin. When I somehow had this working in my initial test it was on a root domain and I did not have an applinks section; that was only added so I could use the verification tools for universal links.
I am not sending back different or duplicated content, and I block the www subdomain (I have also tried it with a www subdomain, for the record).
{
  "applinks": {
    "details": [
      {
        "appIDs": [
          "MYTEAMID.com.cf.example.b2c.ios"
        ],
        "components": [
          {
            "#": "no_universal_links",
            "exclude": true,
            "comment": "Matches any URL with a fragment that equals no_universal_links and instructs the system not to open it as a universal link."
          }
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
I have also tried this with the older format:
{
  "applinks": {
    "apps": [],
    "details": [
      {
        "appID": "MYTEAMID.com.cf.example.b2c.ios",
        "paths": [
          "*"
        ]
      }
    ]
  },
  "webcredentials": {
    "apps": [
      "MYTEAMID.com.cf.example.b2c.ios"
    ]
  }
}
associatedDomains (iOS, Expo config)
associatedDomains: [
  `webcredentials:${SUBDOMAIN}.example.app`,
  `applinks:${SUBDOMAIN}.example.app`,
],
Help :)
I have been trying to get this to work for a long time now and I am completely out of ideas. If anybody has any suggestions I would really appreciate it. I am very confused as to how the device's request seems correct and the CDN content is correct, yet it is still not working. It is also worth reiterating that I need different subdomains for each tenant, as the credentials must not be shared across apps, so the keychain-to-domain association must be different for each app.
I am wondering if it's the Let's Encrypt wildcard SSL certificate, but I wouldn't expect the file to validate, or Apple to cache it, if that were the case. It seems very unlikely to me, but it is the only thing I haven't tried at this point.
Many Thanks,
Mark

Autodesk Forge Configurator Inventor - Azure deployment problem

I am having trouble deploying an app to Azure.
I started with the https://github.com/Autodesk-Forge/forge-configurator-inventor repo. I managed to run it locally with no errors. I am able to log in, upload my own zipped files, change parameters, export a PDF and download it. Everything is fine. Now I want to publish the app to Azure.
The app is currently running, so you can check it out: https://pjk-config.azurewebsites.net
What is wrong: I cannot upload any models after logging in. No error is displayed. If I make a change to the wrench or wheel model and update it, nothing happens either.
What I did:
created an Azure account,
changed the callback URL to my app (in my situation: "https://pjk-config.azurewebsites.net/"),
changed WebApplication.Program.cs by removing the UseKestrel() statement (please check that):
{
    webBuilder.UseStartup<Startup>();
    var port = Environment.GetEnvironmentVariable("PORT");
    // If deployed to a service like Heroku, need to listen on port defined in the environment, not the default one
    if (!string.IsNullOrEmpty(port))
    {
        webBuilder.UseUrls("http://*:" + port);
        Log.Logger.Information($"PORT environment variable defined to:{port}");
    }
});
appsettings:
inviteonlymode - false
embedded mode - false
publisher settings (but I see polling in the output, so I think something is missing):
"CompletionCheck": "Callback",
"CallbackUrlBase": "https://pjk-config.azurewebsites.net"
I deployed through VS 2019 by right-clicking the WebApplication project and choosing Publish, using this reference:
https://learn.microsoft.com/en-us/visualstudio/deployment/quickstart-deploy-to-azure?view=vs-2019
If you need any additional info, just let me know. I have been fighting with this for almost 30 days on my own. I am a beginner and this is my first question on this page, so I apologize for the lack of precise information about my problem. Just tell me what you need and I will send it over.
Thank you for your effort and help. I figured out how to deploy to Azure and run it without bugs. It was about the callback. In my situation, the callback URL under My Apps > Autodesk Forge should be https://myapp.azurewebsites.net (no slash at the end), and in appsettings.json I went with this:
"Publisher": {
"CompletionCheck": "Polling",
"CallbackUrlBase": "https://myapp.azurewebsites.net/"
Notice the slash at the end.
Probably the next step will be changing CompletionCheck to Callback.
The app is running and I can work on the Inventor part.
Thanks!

Angular 6 PWA -- The PWA functionality is interfering with Azure ADAL authentication, not sure how to bypass it

I have a PWA built with Angular 6 and the @angular/pwa npm package, authenticating using the adal-angular4 npm package (but I could just rebuild that from scratch if needed -- I don't think the issue is a bug in the package).
When attempting to authenticate, although it does work, users are very often greeted with this "not found" message (the screenshot is of the console, but it's the same).
This especially seems to be the case if you are already authenticated to another Azure AD product (or this one itself), where it normally should only load for a while and then let the user in.
Service worker error transcript:
Failed to load 'link.com/#LONGTOKEN' A serviceWorker passed a promise
to FetchEvent.respondWith() that rejected with 'Error: Response not Ok
(fetchAndCacheOnce): request for LINK.com/index.html returned response 404 Not Found'.
It seems that writing a function to check for a new version of the PWA has cleaned everything up. Because it's a PWA, when files are replaced with a new version the cache will still be there, and shift+reloading won't necessarily clear it, causing a lot of unwanted behaviour.
The code for the cleanup looks like this:
First, inject the following in the constructor: updates: SwUpdate
import { SwUpdate } from "@angular/service-worker"
Then, inside ngOnInit, I have the following:
updates.available.subscribe(event => {
  updates.activateUpdate().then(() => document.location.reload());
});
It will force a complete refresh 2-3 seconds in if there's a new version available but all works well afterwards.
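Put together, the relevant parts of the component look roughly like this (the component name and selector here are illustrative, not taken from the original code):

import { Component, OnInit } from '@angular/core';
import { SwUpdate } from '@angular/service-worker';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent implements OnInit {
  // Inject SwUpdate so the component can react to newly available service worker versions.
  constructor(private updates: SwUpdate) {}

  ngOnInit() {
    // When a new version of the PWA is available, activate it and hard-reload so the
    // stale cached index.html is not served during the authentication redirects.
    this.updates.available.subscribe(() => {
      this.updates.activateUpdate().then(() => document.location.reload());
    });
  }
}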

boto3 cache session token not working

Either there's something borked in my environment or this functionality is broken. It appears it worked at one point according to the blog I followed:
What I'd like to do is run my script and enter the MFA code, then be able to run it again without entering the MFA code, making use of the cached session token.
The samples I've seen are:
session = boto3.Session(profile_name='w2-cf3')
ec2_client = session.client('ec2',region_name='us-west-2')
I'm then prompted for my mfa:
Enter MFA code:
I enter it and my code runs. At this point my session token should be cached; that's how it works in the AWS CLI. However, on the second run, instead of reading in my cached session for this profile, boto3 disregards it and prompts me again for my MFA:
Enter MFA code:
Here's what my ~/.aws/config file looks like:
[profile default]
region = us-west-2
output = json
[profile w2-cf3]
region = us-west-2
source_profile = default
role_arn = arn:aws:iam::<accountid>:role/<role>
mfa_serial = arn:aws:iam::<accountid>:mfa/<user>
Here's what my ~/.aws/credentials file looks like:
[default]
aws_access_key_id=<access key>
aws_secret_access_key=<secret key>
Expected: I expected that the second time I run my script it would make use of the cached session token, like the AWS CLI does. The session token provided by AWS lasts one hour.
This is discussed in the GitHub repo for botocore here, and a pull request has been submitted and is being discussed.
You're correct; it seems this was working back in 2014 but has somehow been removed. From the discussion on the thread mentioned above, this should be re-implemented soon; follow the pull request thread and make sure to upgrade when it is released.

Retrieving selenium logs and screenshots from grid back in Intern

There are two parts to my question with regard to the Intern workflow in case of an exception:
1. Per the Selenium Desired Capabilities specification, RemoteWebDriver captures screenshots on exceptions by default (unless it is disabled by setting webdriver.remote.quietExceptions). Is it possible to retrieve these screenshots in Intern?
2. I have set up a Selenium Grid with multiple platforms/browsers and can execute Intern tests on the grid successfully. However, I am trying to gather the logs back in my Intern environment so that I don't have to sign on to each machine on the grid to see them. I am particularly interested in the server, driver, and browser logs described in the Selenium logging guide. I tried adding the following Intern configuration based on the Selenium Desired Capabilities guide but wasn't able to get any logs:
capabilities: {
  'selenium-version': '2.39.0',
  'driver': 'ALL',
  'webdriver.log.driver': 'INFO',
  'webdriver.chrome.logfile': 'C:\\intern\\logs\\chromedriver.log',
  'webdriver.firefox.logfile': 'C:\\intern\\logs\\firefox.log'
}
To get a screenshot yourself you can call remote.takeScreenshot().then(function (base64Png) {}), but there is no way that I am aware of to retrieve the automatically generated screenshots—there appears to be nothing in the WebDriver JsonWireProtocol to do so.
To retrieve logs, you can call remote.log(typeOfLog).then(function (logs) {}). See the JsonWireProtocol on log for more information on what you get back.
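In other words, something along these lines (a sketch only; it assumes the tests run under Node so fs is available, and that remote is the this.remote command object of a functional test):

// Sketch: manually capture a screenshot and the browser log from a functional test.
import * as fs from 'fs';

function captureArtifacts(remote: any, name: string) {
  return remote
    .takeScreenshot()
    .then((base64Png: string) => {
      fs.writeFileSync(name + '.png', Buffer.from(base64Png, 'base64'));
      // 'browser' is one of the log types defined by the JsonWireProtocol.
      return remote.log('browser');
    })
    .then((logs: object[]) => {
      fs.writeFileSync(name + '-browser.json', JSON.stringify(logs, null, 2));
    });
}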
There is a way to capture the automatically generated screenshots. Using a custom reporter (https://github.com/theintern/intern/wiki/Using-and-Writing-Reporters#custom-reporters) I was able to save a screenshot and write the browser console logs to a file.
As mentioned in the link above, when the '/test/fail' topic callback is called, it is passed a test object. If the WebDriver had failed internally, this object will have a 'test.error.cause.value.screen' property present in it. This is the property that stores the WebDriver-generated screenshot. So the following is what I did:
if (test.error.cause.value.screen) {
  // Store this variable into a file using Node's fs library
}
If you look at the error object, you will also get to see more error information that the WebDriver has logged.
Regarding the browser logs, @C Snover has hit the nail on the head. But that information is only available inside the remote object, which is available when the '/session/start' topic callback is called. So what I did is create a map from the session ID in the remote object to the remote object itself. Luckily, the test object has the session ID in it too, so I retrieved the remote object from my map using test.sessionId as the key and logged the browser logs as well. In short, this is what I did:
'/session/start': function (remote) {
  sessions[remote.sessionId] = { remote: remote };
},
'/test/fail': function (test) {
  var remote = sessions[test.sessionId].remote;
  remote._wd.log('browser', function (err, logs) {
    // Store the logs array into a file using Node's fs library
  });
}
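Pulling the two fragments above together, the whole reporter ends up looking roughly like this (a sketch; the file names, the fs usage, and how the reporter is registered in the Intern config are assumptions, while the topic keys and fields come from the snippets above):

import * as fs from 'fs';

const sessions: { [sessionId: string]: { remote: any } } = {};

export = {
  '/session/start': function (remote: any) {
    sessions[remote.sessionId] = { remote: remote };
  },

  '/test/fail': function (test: any) {
    // Screenshot that the remote WebDriver captured automatically when it failed internally.
    const screen = test.error && test.error.cause && test.error.cause.value
      && test.error.cause.value.screen;
    if (screen) {
      fs.writeFileSync(test.id + '.png', Buffer.from(screen, 'base64'));
    }

    // Browser console logs for the session this test ran in.
    const session = sessions[test.sessionId];
    if (session) {
      session.remote._wd.log('browser', function (err: any, logs: object[]) {
        if (!err) {
          fs.writeFileSync(test.id + '-browser.json', JSON.stringify(logs, null, 2));
        }
      });
    }
  }
};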