I'm new to Cypress and have run into an issue. My base URL is set to the domain I want to test. The problem is that to test logging in on my base URL site, I first need to verify the user on another site; once I click Apply on site number 2, the page on my base URL reloads and I can then test the rest of the site.
When I try to visit site 2 from my test I get this error:
cy.visit() failed because you are attempting to visit a URL that is of
a different origin.
The new URL is considered a different origin because the following
parts of the URL are different:
superdomain
You may only cy.visit() same-origin URLs within a single test.
I read https://docs.cypress.io/guides/guides/web-security.html#Set-chromeWebSecurity-to-false and tried setting "chromeWebSecurity": false in cypress.json, but I still get the same issue (I'm running in Chrome).
Is there something I am missing?
As a temporary but solid workaround, I was able to find this script in one of the Cypress GitHub issue threads (I don't remember where I found it, so I can't link back to it).
Add the below to your Cypress commands file:
Cypress.Commands.add('forceVisit', url => {
  cy.window().then(win => {
    return win.open(url, '_self');
  });
});
and in your tests you can call
cy.forceVisit("https://www.google.com")
From version 9.6.0 of Cypress, you can use cy.origin. To use it, you must first set the "experimentalSessionAndOrigin" flag to true:
{
  "experimentalSessionAndOrigin": true
}
And here's how to use it.
cy.origin('www.example.com', () => {
  cy.visit('/')
})
cy.origin changes the base URL inside its callback, so you can navigate to the external site via cy.visit('/').
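For the scenario in the question (verify the user on site 2, then continue on the base URL), a rough sketch could look like the following; the second-site URL, the /verify path and the Apply button are placeholders for whatever the verification site actually uses:
cy.visit('/')                               // the baseUrl site
cy.origin('https://site2.example.com', () => {
  cy.visit('/verify')                       // resolved against the origin given above
  cy.contains('button', 'Apply').click()    // triggers the redirect back to the base site
})
// back on the primary origin, continue testing the base site
cy.url().should('include', Cypress.config('baseUrl'))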
You can stub the redirect from the login site to the base site and assert the URL that was called.
Based on Cypress tips and tricks, here is a custom command to do the stubbing.
The login page may use one of several methods to redirect, so besides the replace(<new-url>) stub given in the tip, I've added href = <new-url> and assign(<new-url>).
Stubbing command
Cypress.Commands.add('stubRedirect', () => {
  cy.once('window:before:load', (win) => {
    win.__location = {                        // set up the stub
      replace: cy.stub().as('replace'),
      assign: cy.stub().as('assign'),
      href: null,
    }
    cy.stub(win.__location, 'href').set(cy.stub().as('href'))
  })
  cy.intercept('GET', '*.html', (req) => {    // catch the page as it loads
    req.continue(res => {
      res.body = res.body
        .replaceAll('window.location.replace', 'window.__location.replace')
        .replaceAll('window.location.assign', 'window.__location.assign')
        .replaceAll('window.location.href', 'window.__location.href')
    })
  }).as('index')
})
Test
it('checks that login page redirects to baseUrl', () => {
  cy.stubRedirect()
  cy.visit(<url-for-verifying-user>)
  cy.wait('@index')                           // wait for the page to load
  cy.get('button').contains('Apply').click()  // trigger the redirect
  const alias = '@replace'  // or '@assign' or '@href',
                            // depending on the method used to redirect;
                            // if you don't know which, try each one
  cy.get(alias)
    .should('have.been.calledOnceWith', <base-url-expected-in-redirect>)
})
You can't!
But, maybe it will be possible soon. See Cypress ticket #944.
Meanwhile you can refer to my lighthearted comment in the same thread where I describe how I cope with the issue while Cypress devs are working on multi-domain support:
For everyone following this, I feel your pain! #944 (comment) really gives hope, so while we're patiently waiting, here's a workaround that I'm using to write multi-domain e2e cypress tests today. Yes, it is horrible, but I hope you will forgive me my sins. Here are the four easy steps:
Given that you can only have one cy.visit() per it, write multiple its.
Yes, your tests now depend on each other. Add cypress-fail-fast to make sure you don't even attempt to run other tests if something failed (your whole describe is a single test now, and it makes sense in this sick alternate reality).
It is very likely that you will need to pass data between your its. Remember, we're already on this crazy “wrong” path, so nothing can stop us naughty people. Just use cy.writeFile() to save your state (whatever you might need), and use cy.readFile() to restore it at the beginning of your next it (a minimal sketch follows at the end of this answer).
Sue me.
All I care about at this point is that my system has tests. If cypress adds proper support for multiple domains, fantastic! I'll refactor my tests then. Until that happens, I'd have to live with horrible non-retriable tests. Better than not having proper e2e tests, right? Right?
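To illustrate step 3, here is a minimal sketch of passing state between dependent its; the file path, URLs and the saved field are made up for the example:
it('verifies the user on the second site', () => {
  cy.visit('https://verify.example.com/login')    // hypothetical verification site
  // ... perform the verification steps, click Apply, etc. ...
  cy.url().then((redirectUrl) => {
    // save whatever the next test needs
    cy.writeFile('cypress/fixtures/e2e-state.json', { redirectUrl })
  })
})

it('continues on the base URL', () => {
  cy.readFile('cypress/fixtures/e2e-state.json').then(({ redirectUrl }) => {
    cy.visit(redirectUrl)
    // ... test the rest of the site ...
  })
})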
You could set window.location.href manually, which triggers a page load. This works for me:
const url = 'http://localhost:8000';
cy.visit(url);
// second "visit"
cy.window().then(win => win.location.href = url);
You will also need to add "chromeWebSecurity": false to your cypress.json configuration.
Note: setting window.location to navigate won't tell Cypress to wait for the page load; you need to wait for the page to load yourself, or use a timeout on get.
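For example, a sketch of waiting explicitly after the manual navigation; the /next-page path and the #dashboard selector are assumptions for illustration:
cy.window().then(win => win.location.href = 'http://localhost:8000/next-page'); // hypothetical path
// Cypress doesn't track this navigation, so wait on the URL and an element with a generous timeout
cy.url({ timeout: 10000 }).should('include', '/next-page');
cy.get('#dashboard', { timeout: 10000 }).should('be.visible');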
Related
We have 2 Puppeteer suites: the first is "isolated" (just the UI, without a backend) and the second has the service(s) connected.
With the latter, this works:
await testPage.click('.button-class');
But it's not working in the first, isolated one, so we're using:
await testPage.evaluate(() => {
  const button = document.querySelector('.button-class');
  button.click();
});
which works fine.
At first I thought it might have something to do with the waitUntil option of the goto() method, but I tried all the different values (and also without the option defined) and the result was the same: click() doesn't work.
Also, in isolation this element is undefined, while with the backend it logs some ElementHandle.
It looks like adding the --no-sandbox flag to the launchOptions args fixed this.
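For reference, a minimal sketch of where that flag goes (the app URL is a placeholder):
const puppeteer = require('puppeteer');

(async () => {
  // the only change relevant here is the --no-sandbox flag in the launch args
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
  const testPage = await browser.newPage();
  await testPage.goto('http://localhost:3000', { waitUntil: 'networkidle0' }); // hypothetical app URL
  await testPage.click('.button-class');
  await browser.close();
})();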
I am using Cypress.io for end-to-end testing in our team, but we very often have a problem with the cy.visit() function.
The website loads many resources from our server (CSS files, JS files, ...) and some external resources (JS files).
If you open our website, it sometimes happens that an external JS file is pending (the browser is waiting for it).
Cypress, during the execution of cy.visit(), apparently waits until all resources are loaded, and this is a problem. I don't need to wait for all resources, because, for example, that external JS is for an advert and is not important for our test.
Can I tell Cypress something like: "A few seconds after the page starts loading, you can run this test without all resources loaded"?
I have tried onBeforeLoad combined with setTimeout and reload, but it failed :(
cy.visit('https://www.example.org', {
  onBeforeLoad: (win) => {
    setTimeout(function() { cy.reload(); }, 10000);
  }
})
I'm going crazy and I don't know what to do next. Please help me, and sorry for my English :) Thank you! :)
You can block unnecessary domains from loading with the blacklistHosts: [] option in your cypress.json. Just add the domain name of the advertiser (and potentially anything else you don't need, like Google Analytics) to the blacklistHosts array:
{
  // the rest of your cypress.json...
  "blacklistHosts": [
    "cdn.my-advertiser.com"
  ]
}
More information about blacklistHosts is available in the docs.
I'm trying to get the SKUs available for a freemium Chrome Extension I'm developing.
I'm following all of the documentation here:
https://developer.chrome.com/webstore/payments-iap
...and I'm using the provided buy.js file, but it doesn't seem to work and the returned error messages are useless: "INVALID_RESPONSE_ERROR"
My code:
google.payments.inapp.getSkuDetails({
  parameters: {env: 'prod'},
  success: (r) => {
    console.log(r);
  },
  failure: (err) => {
    console.log(err);
  },
});
Thoughts:
- Am I missing some permission in my manifest? I don't see any mention that it needs any additional ones.
- Other StackOverflow questions have mentioned needing a proxy due to region issues. I'm in the States, so that shouldn't be an issue.
- I've tried the above from both an options page and a popup - does it need to happen in a background page?
I'm pretty baffled. Any help is appreciated!
Thanks.
Updates:
The above works when released (in prod), but not locally.
In prod you cannot buy your own item (heads-up). It'll give you some stupid, meaningless error, but won't tell you that.
I still can't get this to work locally, which means I have to test in prod.
If you need this to work locally, you must set the "key" in your manifest.json file. When you reload the unpacked extension, it will then show the same ID as the extension loaded from production.
Here are instructions on how to get the relevant key.
If you are debugging your extension in unpacked mode, you may need to set the production "key" in your manifest.
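As a rough sketch (all values here are placeholders, not real ones), the manifest would carry the key alongside the usual fields:
{
  "manifest_version": 2,
  "name": "My Extension",
  "version": "1.0.0",
  "key": "<public-key-copied-from-the-packaged-extension>"
}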
I am trying to implement Push Notifications on my website (using Pushpad). Therefore I created a "manifest.json" with following content:
{
  "gcm_sender_id": "my_gcm_sender_id",
  "gcm_user_visible_only": true
}
Of course, I created a valid GCM account and have a sender ID.
I put the manifest.json into my root directory and I also added this line to my index.php:
<link rel="manifest" href="/manifest.json">
Using Firefox everything works fine and I can send and receive push notifications (so I think the manifest include works), but Chrome won't work...
The console shows the following error:
Uncaught (in promise) DOMException: Registration failed - manifest empty or missing
I searched Google for a long time and tried everything I found, but nothing works.
What I tried:
created the manifest.json with Notepad and saved it as type "All Files" (so no hidden .txt extension) and with UTF-8 encoding
restarted Chrome
cleared Chrome's cache, history, etc.
I really hope somebody can help me.
For me it was a redirect. The manifest.json must return a 200 status code (must be directly available from the server), without any redirects.
You can check the response via
wget --max-redirect=0 https://example.com/manifest.json
or
curl -I https://example.com/manifest.json
I faced the same issue. Adding the manifest <link> right after the opening <head> tag worked for me. Cheers!
This may be an issue with your Service Worker scope. I ran into a similar problem when I rearranged my files/directories. Make sure your sw.js is on the same level as your manifest.json, otherwise the service worker won't be able to find your manifest. Try putting them both in the root of your directory. Optionally, you can specify the scope of your service worker by adding it to serviceWorker.register():
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw-test/sw.js', {scope: '/sw-test/'})
    .then(function(reg) {
      // registration worked
      console.log('Registration succeeded. Scope is ' + reg.scope);
    }).catch(function(error) {
      // registration failed
      console.log('Registration failed with ' + error);
    });
}
Read more here:
https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers
I was wondering whether your manifest.json is publicly accessible?
If not, maybe you can try making it publicly accessible to see if that helps.
Also, it seems that current Chrome does not send cookies when fetching the manifest.json.
Because I didn't find an answer anywhere out there on the web, but managed to get it working after some time, I want to provide my solution/answer for other users who probably have the same problem:
In the file where I included the Pushpad files, I had written some PHP code before the <head> tag to include some files, e.g. for the database connection. After I moved the PHP code below the <head> tag, everything worked fine.
There seem to be three ways to fix this bug:
a) No redirects for "manifest.json" file.
b) Put a link to this file at the top of the <head> tag.
c) Make sure there is no other manifest file linked before this one, because it seems the web push script will try to import the first one and return an error due to the wrong data.
I have tried all three and finally forced Chrome to behave.
Adding the following block fixed this for me:
self.addEventListener('push', (event) => {
  const title = 'Get Started With Workbox';
  const options = {
    body: event.data.text()
  };
  event.waitUntil(self.registration.showNotification(title, options));
});
I am using the offline HTML5 functionality to cache my web application.
It works fine some of the time, but there are certain circumstances where it has weird behaviour. I am trying to figure out why, and how I can fix it.
I am using Sammy, and I think that might be related.
Here is when it goes wrong:
1. Browse to my page http://domain/App (note: I haven't included a slash after /App)
2. I am then redirected to http://domain/App/#/ by Sammy
3. Everything is cached (including images)
4. I go offline (I am using a virtual machine for this, so I unplug the virtual network adapter)
5. I close the browser
6. I reopen the browser and browse to my page http://domain/App/#/
7. The content is showing except for the images
Everything works fine if in step #1 I browse to http://domain/App/ including the slash.
There are some other weird states it gets into where the sammy routes are not called, so the page remains blank, but I haven't been able to reliably replicate that.
??
UPDATE: The problem is that the above steps caused problems before. It is now working when I follow the above steps, so it is hard to say what is going on exactly. I am starting from a consistent state every time because I am starting from a snapshot in a VM.
My cache manifest looks like this,
CACHE MANIFEST
javascripts/jquery-1.4.2.js
javascripts/sammy/sammy.js
javascripts/json_store.js
javascripts/sammy/plugins/sammy.template.js
stylesheets/jsonstore.css
templates/item.template
templates/item_detail.template
images/1Large.jpg
images/1Small.jpg
images/2Large.jpg
images/2Small.jpg
images/3Large.jpg
images/3Small.jpg
images/4Large.jpg
images/4Small.jpg
index.html
I'm running into a similar issue as well.
I think part of the problem is that jQuery's ajax is misinterpreting the response. I believe Sammy uses jQuery to make the ajax calls, which is leading to the errors.
Here's a code snippet I used to test for this (though it's not a solution):
this.get('#/', function (context) {
  var uri = 'index.html';

  // what I'm trying to call
  context.partial(uri, {}); // fails on some browsers after initial caching

  // shows that jQuery's ajax is misinterpreting the response
  $.ajax({
    url: uri,
    success: function(data, textStatus, jqXHR) {
      alert('success');
      alert(data);
    },
    error: function(jqXHR, textStatus, errorThrown) {
      alert('error');
      if (jqXHR.status == 0) { // this is actually a success
        alert(jqXHR.responseText);
      } else {
        alert('error code: ' + jqXHR.status); // probably a real error
      }
    }
  });
});