Apply with LinkedIn button does not work with Chrome 80? - google-chrome

We have a problem with the LinkedIn button we implemented on our job portal so that users can apply directly with their LinkedIn account. Since the last Chrome update (80), users cannot log in to LinkedIn through the script-generated button, so it is not possible to apply with the LinkedIn button. I guess this happens because the script from LinkedIn sets no SameSite attribute. Info: https://blog.heroku.com/chrome-changes-samesite-cookie Are you familiar with this problem? Can you please help me with it?
https://s5.gifyu.com/images/Chrome80ApplyWithLinkedin.gif

The error message there states that your instance of Chrome is not yet blocking the cookies, so it's unlikely that the SameSite attribute is the cause of the issue here. However, you can verify this by taking the following steps.
Check your browser at https://samesite-sandbox.glitch.me - if all the rows are green with ✅ then your browser is using the new behaviour.
You can toggle this behaviour on and off via chrome://flags. Enter the following in your location bar:
chrome://flags/#same-site-by-default-cookies
chrome://flags/#cookies-without-same-site-must-be-secure
Set both to "Disabled".
Try your site in an incognito window so that you can be sure there are no old cookies set. If you still receive the error, then this is highly unlikely to be related to the SameSite attribute.
I don't know how long ago your initial implementation was, but it does look as if there were some updates to the API at the end of 2018: https://learn.microsoft.com/en-us/linkedin/talent/apply-with-linkedin
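If those checks do show that the new SameSite behaviour is active and the flow only breaks then, any cookies used in the cross-site LinkedIn flow that you set yourselves would need to be marked SameSite=None; Secure. A minimal sketch of what that looks like, assuming a hypothetical Node endpoint (the cookie name and value are placeholders):

import { createServer } from "http";

// Minimal sketch: mark a cookie used in a cross-site (iframe/popup) flow
// so Chrome 80+ still sends it. Cookie name and value are placeholders.
createServer((req, res) => {
  res.setHeader(
    "Set-Cookie",
    // SameSite=None requires Secure, i.e. the page must be served over HTTPS.
    "li_apply_session=abc123; Path=/; Secure; SameSite=None; HttpOnly"
  );
  res.end("ok");
}).listen(3000);

Note that this only helps for cookies set by your own servers; cookies set by LinkedIn's own domains would have to be fixed by LinkedIn.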

Related

Changing Chrome extension site access has no effect

Chrome extensions have a "Details" page (for example, here's Adblock's). On this page you can change the permissions of the extension to one of: On click, On specific sites, or On all sites. These options determine whether the extension runs all the time on every site or only when you want it to.
This extension, however, does not change behavior when I modify the permissions from its Details page. Why is this the case? Here's their source code on GitHub.
I noticed that they use a persistent background page. Would that cause the issue?
Thanks in advance.
The correct answer as stated in the comments:
[the extension] doesn't actually run on sites other than Reddit. It just takes the URL of a tab and runs it through Reddit.
This means that changing the permissions in the Details page doesn't affect anything.
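In other words, an extension that only reads the active tab's URL and opens it on Reddit never injects anything into the page you are visiting, so the site-access options have nothing to control. A hypothetical background-script sketch of that pattern (not the extension's actual source; the Reddit search URL is an assumption):

// Hypothetical MV2-style background script: read the clicked tab's URL and
// look it up on Reddit. No content scripts run on the visited site, so
// "On click / On specific sites / On all sites" has no effect.
chrome.browserAction.onClicked.addListener((tab) => {
  if (!tab.url) return;
  const redditSearch =
    "https://www.reddit.com/search/?q=url:" + encodeURIComponent(tab.url);
  chrome.tabs.create({ url: redditSearch });
});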

IFrame on Safari 11 behaving weirdly

In a very similar fashion to a related question, the web application I have creates an iFrame pointing at a login form, with a certain URL and a bunch of GET parameters, but until that URL is opened in a separate window at least once, the page loaded in the iFrame doesn't seem to receive the GET parameters at all. This is not a PHP application, however, so there's no session_start issue as suggested in the answer to the related question.
I tried tracing the network with Charles, and the only outgoing request I see is a CONNECT request to the domain of the URL without any GET payload.
Not sure if related or important: the main page domain is HTTP, the login form is on HTTPS.
Is there some preflight voodoo that needs to be done for this to work?
The whole solution works as is on Safari 10 and other browsers, IE included.
The problem was that Preferences -> Privacy -> Prevent cross-site tracking was turned on. When switched off, it works like a charm.

How to subscribe pushManager via an iframe in Chrome?

I can register a service worker via an iframe. When I try to run "pushManager.subscribe" I get:
DOMException: Registration failed - permission denied
This problem only occurs in Chrome via an iframe. It works fine in Firefox, and it works fine without an iframe in Chrome.
You cannot use an iframe; it isn't allowed.
The permission request must be performed from the top level window.
The only alternative (that we have used for Pushpad Express for example) is to redirect to / open a new window from the iframe, then ask permission from the top level window and finally redirect back.
This is meant to make it clear to the user which website is asking for permission to send push notifications. Otherwise the fear is that, for example, an ad might show a prompt for push notifications, and that would be misleading.
BTW, I had also suggested adding a new value to the sandbox attribute of iframes in order to allow prompts for push notifications, but the spec currently doesn't support it.
Long story short, you can't!
The page you are registering the service worker on must be opened as a top-level page (not inside an iframe) for the push subscription to succeed.
One way to achieve this is:
suppose your iframe looks like this:
<iframe src="https://example.com"></iframe>
Use postMessage to communicate with the iframe and ask for permission, then window.open("https://example.com/") to register the service worker and fetch the token, as in the sketch below.
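A minimal sketch of that flow, assuming both the iframe and a hypothetical top-level helper page at https://example.com/push-permission are on the same origin (sw.js and the VAPID key are placeholders):

// Inside the iframe: open the same origin as a top-level window so the
// permission prompt is attributed to example.com itself.
function openPermissionWindow(): void {
  window.open("https://example.com/push-permission", "_blank");
}

// In the top-level /push-permission page: register the worker and subscribe.
async function subscribeTopLevel(): Promise<void> {
  const registration = await navigator.serviceWorker.register("/sw.js");
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    // Placeholder: supply your real VAPID public key bytes here.
    applicationServerKey: new Uint8Array([/* ... */]),
  });
  // Hand the subscription back, e.g. to the opener, which can relay it to
  // the embedding page via postMessage.
  window.opener?.postMessage(JSON.stringify(subscription), "https://example.com");
}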
Hope this helps :)
Chrome requires you to subscribe to push from a top-level page, since otherwise it is less clear which origin the user is allowing push notifications for.

Chrome says my website contains malware?

Chrome shows this warning while I am accessing my site. After searching, I cleaned up the code on the site, but Chrome still shows the warning. Then I removed all files from my site and uploaded just index.html (a blank file), but the warning is still showing.
Chrome warnings are based on blacklists which record where malware has been found on a site or domain; this isn't a live "scan" and does not necessarily mean that malware is on that page at that specific time. It is not clear from your question whether you've created a new folder and index.html and are also seeing a malware warning when browsing to that URL, or whether you've replaced your site content with an empty folder and index.html and the warning is still showing. Once you have taken the steps to disinfect the site you can request a review, which should help remove the warning: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=163633
The malware warning should be taken seriously even if you are confident in your own site content, as crackers use automated toolkits to find vulnerabilities in websites and inject code into them to infect visitors; since these kits are largely automatic, there isn't the protection in obscurity you might otherwise assume.
If you've not been able to find and fix the issue Chrome is warning about, you owe it to your visitors (and your own reputation) to take the site content down until you can resolve the problem.
Google Chrome's malware blacklists should be based on the same data used by Google's Safe Browsing advisory. You can access this information for a particular site (e.g. stackoverflow.com) via the following URL:
http://www.google.com/safebrowsing/diagnostic?site=stackoverflow.com
Just replace the domain with your own and it should give you some indication why your site generated malware warnings in Chrome.
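For example, a trivial sketch that builds the diagnostic URL for your own domain (example.org is a placeholder):

// Build the Safe Browsing diagnostic URL for a given domain (placeholder domain).
const site = "example.org";
const diagnosticUrl =
  "http://www.google.com/safebrowsing/diagnostic?site=" + encodeURIComponent(site);
console.log(diagnosticUrl); // open the printed URL in a browser to see why the site was flagged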
1. In the top-right corner of the browser window, click the Chrome menu.
2. Select Settings.
3. Click Show advanced settings.
4. Under "Privacy," uncheck the box "Protect you and your device from dangerous sites."

Is it possible to see the data of a POST request in Firefox or Chrome?

How can I intercept the POST data a page is sending, in Firefox or Chrome, via configuration, extension or code? (The code part makes this programming related. ;)
I currently use Wireshark/Ethereal for this, but it's a bit difficult to use.
You could just use the Chrome Developer Tools, if you only need to track requests.
Activate them with Ctrl+Shift+I and select the Network tab.
This works also when Chrome talks HTTPS with another server (and unless you have the HTTPS private key you cannot use Wireshark to sniff that traffic).
(I copied this answer from this related query.)
With Firefox you can use the Network tab (Ctrl+Shift+E or Command+Option+E). The sub-tab "Params" shows the submitted form data.
Reference: https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor/request_details#Params
Alternatively, in the console (Ctrl+Shift+K or Command+Option+K) right click on the big pane and check "Log Request and Response Bodies". Then when the form is submitted, a line with POST <url> will appear. Click on it; it will open a new window with the form data.
As of the time of originally writing this reply, both methods messed up newlines in textarea fields. The former deleted them, the latter converted them to blanks. I haven't checked with a newer version.
Do you have control of the browser POSTing the data?
If you do, then just use Firebug. It's got a lot of useful features, including this one.
For Firefox there is also TamperData, and even more powerful and cross-browser is Fiddler.
Programmatically, you can do this with dBug - it's a small code module you can integrate into any website.
I use it with CodeIgniter and it works perfectly.
In the Network tab of the Web Developer tools in Firefox, right-click on the PUT, POST or any other type of request and you will find the "Use as Fetch in Console" option. There you can see the data you are passing.
Do the respective steps sequentially.
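For reference, the command that "Use as Fetch in Console" generates (and that you can re-run or edit in the DevTools console) looks roughly like this; the URL, headers and body below are placeholders, not real captured data:

// Rough shape of the fetch call Firefox generates for a form POST; meant to
// be pasted into the DevTools console, which supports top-level await.
await fetch("https://example.com/submit", {
  method: "POST",
  headers: {
    "Content-Type": "application/x-www-form-urlencoded",
  },
  body: "username=alice&remember=on",
  credentials: "include",
});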