Remote method is getting called twice when using the easyXDM Interface (for cross-domain IFrame communication) with 2 channels - easyxdm

I have 2 IFrames on a host page and want to set up a bi-directional channel between the host page and each IFrame. For that I used the easyXDM Interface class and was able to set up communication between the host page and the IFrames.
The host page is on one domain and the IFrames are on a different domain, but all of the IFrames are on the same domain as each other.
I set up 2 channels on the host page using the easyXDM Interface class and specified the required properties, such as the local methods, remote methods, etc.
The host page has a local method called publish, and this publish method is a remote method on all the IFrames.
The problem is that when publish is called from one IFrame, publish is invoked on the host page for every IFrame's channel.
The code on the host page looks like this:
channel1 = new easyXDM.Interface(
    {
        local: "/name.html",
        remote: "PathToIFrame1.html",
        container: document.getElementById('Div1')
    },
    {
        remote: {
            receive: {}
        },
        local: {
            publish: { method: function () { } }
        }
    }
);
channel2 = new easyXDM.Interface(
    {
        local: "/name.html",
        remote: "PathToIFrame2.html",
        container: document.getElementById('Div2')
    },
    {
        remote: {
            receive: {}
        },
        local: {
            publish: { method: function () { } }
        }
    }
);
And the code on the IFrame side looks like this:
remote = new easyXDM.Interface({},
    {
        remote: {
            publish: {
                isVoid: true
            }
        },
        local: {
            Receive: {
                isVoid: true,
                method: function (a, b) {
                }
            }
        }
    });
When publish is called from the IFrame side, the publish methods of both channel1 and channel2 (on the host side) get called.
Can someone please suggest what could be wrong here?
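One thing worth checking (an assumption on my part, not something stated in the question): easyXDM's transport configuration lets each channel be given its own unique channel name, and two Interface instances on the same host page should not share one. A minimal sketch, assuming the easyXDM version in use accepts a channel property in the first configuration object (verify against the documentation for your version):
channel1 = new easyXDM.Interface(
    {
        channel: "frame1", // assumed: a unique name for the first channel
        local: "/name.html",
        remote: "PathToIFrame1.html",
        container: document.getElementById('Div1')
    },
    {
        remote: { receive: {} },
        local: { publish: { method: function () { } } }
    }
);
channel2 = new easyXDM.Interface(
    {
        channel: "frame2", // assumed: a different name for the second channel
        local: "/name.html",
        remote: "PathToIFrame2.html",
        container: document.getElementById('Div2')
    },
    {
        remote: { receive: {} },
        local: { publish: { method: function () { } } }
    }
);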

Related

Laravel subdomain - 404 on both main and sub domains

I am working on a project which needs a subdomain and a main domain.
So I made some changes to the boot function of the RouteServiceProvider, as shown below. It's working fine on localhost, but on the server, both the main domain and the subdomain return 404.
public function boot()
{
    $this->configureRateLimiting();
    $this->routes(function () {
        Route::prefix('api')
            ->middleware('api')
            ->namespace($this->namespace)
            ->group(base_path('routes/api.php'));

        // Main domain routes.
        Route::domain(config('constants.app_url'))
            ->middleware('web')
            ->namespace($this->namespace)
            ->group(base_path('routes/web.php'));

        // Subdomain routes.
        Route::domain('users.'.config('constants.app_url'))
            ->middleware('web')
            ->namespace($this->namespace)
            ->group(base_path('routes/web_user.php'));
    });
}

How to record HTTP requests with a Google Chrome extension and persist them

I want to create a Chrome extension that records HTTP requests (to a pre-defined host) and persists them as a list in local storage, so that when I call a particular website again the list is extended.
I want to go with Manifest V3 to make the extension "ready for the future". I created a background script with a trigger on the request that currently puts all the details into local storage like this (this is redundant for demonstration purposes; I also tried it separated):
chrome.webRequest.onBeforeRequest.addListener(details => {
    var urls = [];
    chrome.storage.local.get(['data'], function (data) {
        urls = data.urls;
    });
    chrome.scripting.executeScript(
        {
            target: { tabId: details.tabId },
            func: recordClick,
            args: [details, urls]
        },
        () => {
            urls.push(details);
            console.log(urls.length);
            chrome.storage.local.set({ urls: urls });
        });
}, {
    urls: ['<all_urls>']
});
There's another function called recordClick() that does the same as in the callback:
function recordClick(details, urls) {
    urls.push(details.url);
    chrome.storage.local.set({ urls: urls });
}
I tried several approaches to where to load and save the result, but none of them work. When I load the previous urls within the onBeforeRequest trigger, urls is not global and not known within the callback. When I put it outside the trigger definition, it doesn't read the storage in real time. I also tried loading the urls in a content script loaded at "Document start", loading the urls at the top of the background script, and so on.
It seems like I have a timing problem: the trigger always loads an empty list, or the variable is not global. I'm not able to extend the list, no matter where I put the storage functions.
Is my plan feasible at all? What am I doing wrong?
Thanks!
Since chrome.storage.local.get is asynchronous, you should move chrome.scripting.executeScript into its callback (a promise-based variant is sketched after the snippet below).
onCompleted may be more suitable for your purpose than onBeforeRequest.
chrome.webRequest.onBeforeRequest.addListener(details => {
    chrome.storage.local.get('urls', function (data) {
        let urls = [];
        if (data.urls) {
            urls = data.urls;
        }
        urls.push(details);
        chrome.storage.local.set({ urls: urls }, function () {
            console.log('Value is set to ');
            console.log(urls);
        });
        chrome.scripting.executeScript(
            {
                target: { tabId: details.tabId },
                func: function (details, urls) { console.log("executed script"); },
                args: [details, urls]
            },
            () => {
                console.log("injected");
            });
    });
}, { urls: ['<all_urls>'] });
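Since the core of the fix is simply "wait for the read before writing back", here is a minimal alternative sketch using the promise-based chrome.storage API available to Manifest V3 service workers (the URL filter is kept from the snippet above):
// Assumes the "webRequest", "storage" and matching host permissions are declared in manifest.json.
chrome.webRequest.onCompleted.addListener(async (details) => {
    const data = await chrome.storage.local.get('urls'); // wait for the stored list
    const urls = data.urls || [];
    urls.push(details.url);                              // append the new request URL
    await chrome.storage.local.set({ urls: urls });      // persist the extended list
}, { urls: ['<all_urls>'] });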

GWT HTML export from LibGdx shows: GwtApplication: exception: (TypeError)

I have an Android game developed with LibGdx version 1.9.9, which I am trying to export to HTML. I am using GWT (v2.8.2). The game runs well on Android and doesn't have any issues, and exporting it with the command ./gradlew html:dist produces no errors.
But when I place the exported build on localhost and try to access the game, the default loader appears first and then there is a blank screen with this error message:
GwtApplication: exception: (TypeError) : null is not an object (evaluating 'null.zY')
(TypeError) : null is not an object (evaluating 'null.zY')
This is happening in every browser - Safari, Chrome, Firefox.
The stack trace doesn't point to any meaningful place to debug.
Any idea what the problem is? Thanks.
The HTML project's Gradle configuration:
gwt {
    gwtVersion = '2.8.0' // Should match the gwt version used for building the gwt backend
    maxHeapSize = "2G" // Default 256m is not enough for gwt compiler. GWT is HUNGRY
    minHeapSize = "1G"

    src = files(file("src/")) // Needs to be in front of "modules" below.
    modules 'com.package.gamename.GdxDefinition'
    devModules 'com.package.gamename.GdxDefinitionSuperdev'
    project.webAppDirName = 'webapp'

    compiler {
        strict = true
        disableCastChecking = true
    }
}

import org.wisepersist.gradle.plugins.gwt.GwtSuperDev

def HttpFileServer server = null
def httpFilePort = 8080

task startHttpServer () {
    dependsOn draftCompileGwt
    String output = project.buildDir.path + "/gwt/draftOut"
    doLast {
        copy {
            from "webapp"
            into output
        }
        copy {
            from "war"
            into output
        }
        server = new SimpleHttpFileServerFactory().start(new File(output), httpFilePort)
        println "Server started in directory " + server.getContentRoot() + ", http://localhost:" + server.getPort()
    }
}

task superDev (type: GwtSuperDev) {
    dependsOn startHttpServer
    doFirst {
        gwt.modules = gwt.devModules
    }
}

task dist(dependsOn: [clean, compileGwt]) {
    doLast {
        file("build/dist").mkdirs()
        copy {
            from "build/gwt/out"
            into "build/dist"
        }
        copy {
            from "webapp"
            into "build/dist"
        }
        copy {
            from "war"
            into "build/dist"
        }
    }
}

task addSource {
    doLast {
        sourceSets.main.compileClasspath += files(project(':core').sourceSets.main.allJava.srcDirs)
    }
}

tasks.compileGwt.dependsOn(addSource)
tasks.draftCompileGwt.dependsOn(addSource)

sourceCompatibility = 1.6
sourceSets.main.java.srcDirs = [ "src/" ]

eclipse.project {
    name = appName + "-html"
}
Enter SuperDev mode, activate source mapping and debugging, and step through the source in Chrome; that's the way to find these problems.
Start SuperDev mode (for the build file above, that is the superDev Gradle task, e.g. ./gradlew html:superDev)
Open the game's web page
Hit the arrow button at the top left corner
Hit the "compile" button
Source maps are now available in Chrome, so you get a "real" stack trace.

RequireJS text plugin: cannot load HTML from another domain

I'd like to fetch some HTML from another domain with require.js. I know that CORS policies don't allow this easily. Note: I have configured the web server (with Access-Control-Allow-Origin "*" and other directives) and require.js far enough that all JS and CSS files (CSS via the require-css plugin) get loaded from the other domain as expected; only fetching HTML causes problems. In the browser's network log I can even see that the HTML content gets loaded. However, this content does not get passed to the require function! The browser gets the content, but require.js doesn't provide it as a parameter...
My configuration:
requirejs.config({
    baseUrl: "http://some.other.domain/",
    paths: {
        jquery: 'ext/jquery/jquery.min',
        htmlTemplate: 'test.html?',
        siteCss: '../css/site'
    },
    shim: {
        htmlTemplate: [
            'css!siteCss'
        ]
    },
    config: {
        text: {
            useXhr: function (url, protocol, hostname, port) {
                return true;
            }
        }
    },
    map: {
        '*': {
            text: 'ext/require/text',
            css: 'ext/require/css.min'
        }
    }
});
require(['text!htmlTemplate'], function (htmlTemplate) {
    console.log(htmlTemplate); // prints 'undefined' into the console
});
Two notes: The useXhr configuration is taken from "require.js text plugin adds “.js” to the file name", but it makes no difference whether it is there or not. I appended a ? to the htmlTemplate path; with this the .js does not get appended to the URL and the browser loads the HTML content - but, as said before, require.js unfortunately does not pass it on as the htmlTemplate parameter.
What can I do? I've read that if I used the require.js optimizer the generated file wouldn't have this problem anymore (however that works...), but I need to be able to develop my JS without running the optimizer on every edit.
Update: I found one solution, but I'd be happy if anyone can provide the 'right' one.
I've found the actual problem! This part:
config: {
    text: {
        useXhr: function (url, protocol, hostname, port) {
            return true;
        }
    }
},
should really do it. However, I found out that it wasn't being called at all; instead, the default implementation was called, and that returned false.
To make it work, the config section needs to use the right keys, since the map section doesn't seem to be evaluated for it.
So this is the right configuration that fetches HTML from the other domain:
requirejs.config({
    baseUrl: "http://some.other.domain/",
    paths: {
        jquery: 'ext/jquery/jquery.min',
        htmlTemplate: 'test.html', // ---> removed the '?'
        siteCss: '../css/site'
    },
    shim: {
        htmlTemplate: [
            'css!siteCss'
        ]
    },
    config: {
        'ext/require/text': { // ---> full path is required!!!
            useXhr: function (url, protocol, hostname, port) {
                return true;
            }
        }
    },
    map: {
        '*': {
            text: 'ext/require/text',
            css: 'ext/require/css.min'
        }
    }
});

require(['text!htmlTemplate'], function (htmlTemplate) {
    console.log(htmlTemplate); // now prints HTML into the console!!!
});
Hallelujah!
I found the right hint here. Another option might be to set a path for text; in any case, the configuration must be keyed so that the function actually gets called. A sketch of that paths-based alternative follows.
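This is a minimal sketch of that alternative (an assumption, not something I have tested): with an explicit paths entry instead of a map entry, the plugin's module ID stays text, so the module config key text should match and useXhr should be picked up.
requirejs.config({
    baseUrl: "http://some.other.domain/",
    paths: {
        text: 'ext/require/text',   // explicit path instead of a map entry
        css: 'ext/require/css.min',
        htmlTemplate: 'test.html',
        siteCss: '../css/site'
    },
    config: {
        text: {                     // key matches the module ID 'text'
            useXhr: function (url, protocol, hostname, port) {
                return true;
            }
        }
    }
});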
I think I've found a solution. From the docs of requirejs/text:
So if the text plugin determines that the request for the resource is on another domain, it will try to access a ".js" version of the resource by using a script tag. Script tag GET requests are allowed across domains. The .js version of the resource should just be a script with a define() call in it that returns a string for the module value.
Because of that, I changed the configuration to this, so the text plugin is not used anymore:
requirejs.config({
    baseUrl: "http://some.other.domain/",
    paths: {
        jquery: 'ext/jquery/jquery.min',
        htmlTemplate: 'test.html', // removed the '?'
        siteCss: '../css/site'
    },
    shim: {
        htmlTemplate: [
            'css!siteCss'
        ]
    },
    map: {
        '*': {
            // removed the text plugin
            css: 'ext/require/css.min'
        }
    }
    // removed the useXhr configuration for the text plugin
});

require(['htmlTemplate'], function (htmlTemplate) {
    console.log(htmlTemplate); // prints '<div>Here I am!</div>' into the console
});
Now http://some.other.domain/test.html.js gets loaded. The content of test.html.js is:
define(function () {
    return '<div>Here I am!</div>';
});
So I surrounded the HTML with a little bit of JS - not a problem for me. And now htmlTemplate is set to the expected string. It's not pure HTML anymore, but since it is a fixed template (i.e. not generated) that may be acceptable.
I am getting "No 'Access-Control-Allow-Origin' header is present on the requested resource." after adding this code:
config: {
    'ext/require/text': { // required!!!
        useXhr: function (url, protocol, hostname, port) {
            return true;
        }
    }
},

SWFobject in a Chrome Extension - API Unavailable

Hi!
I'm building a Chrome extension in which I need to embed an SWF (via SWFObject) in the background page.
Everything works except the JavaScript controls for the SWF and the event listeners.
My guess is that it has something to do with cross-domain policies, because when testing the page on a web server everything worked fine.
Anyway, here's a snippet:
In the main page:
var playerView = chrome.extension.getBackgroundPage();
$('#playerPause').click(function () {
    playerView.playerPause();
});
In the background:
function playerPause() {
    if (postData[nowPlaying].provider == 'youtube') {
        player.pauseVideo();
    }
    else if (postData[nowPlaying].provider == 'soundcloud') {
        player.api_pause();
    }
}
And the event listeners:
soundcloud.addEventListener('onMediaEnd', playerNext);

function onYouTubePlayerReady(player) {
    player.addEventListener("onStateChange", "function(state){ if(state == 0) { playerNext(); } }");
}
In the console it throws
"Uncaught TypeError: Object # has no method
'pauseVideo'"
for both the Youtube embed the Soundcloud one.
Also, the SWF is embedded like this (and it works):
function loadTrack(id) {
    if (postData[id].provider == 'youtube') {
        swfobject.embedSWF(
            "http://www.youtube.com/e/" + postData[id].url + "?enablejsapi=1&playerapiid=player",
            "player",
            "1",
            "1",
            "8",
            null,
            {
                autoplay: 1
            },
            {
                allowScriptAccess: "always"
            },
            {
                id: "player"
            }
        );
    }
    else if (postData[id].provider == 'soundcloud') {
        swfobject.embedSWF(
            'http://player.soundcloud.com/player.swf',
            'player',
            '1',
            '1',
            '9.0.0',
            'expressInstall.swf',
            {
                enable_api: true,
                object_id: 'player',
                url: postData[id].url,
                auto_play: true
            },
            {
                allowscriptaccess: 'always'
            },
            {
                id: 'player',
                name: 'player'
            }
        );
    }
}
Sorry for the lengthy post, I wanted to provide as much information as possible.
Also, I know the code isn't pretty, this was only my second application ;)
Thanks a lot in advance to anyone who can help,
Giacomo
You can have a look at this extension; you cannot access a local connection in a Chrome extension, but you can run a content script as a proxy script instead. (You can serve a proxy page on GAE or any other free server.)
The problem here is that you can't use inline scripts or inline event handlers in Chrome extensions ever since the manifest evolved to v2.
You should have added the manifest file so I could understand better what's going on. But briefly, all you CAN do is the following (see the sketch after this list):
In the main page,
Remove all inline scripts and move them to an external JS file.
Remove inline event listeners, move them to the same or another external JS file, and use addEventListener().
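A minimal sketch of that migration, assuming the popup markup lives in popup.html and the script in popup.js (both file names are placeholders):
// popup.html keeps only markup plus a script tag, with no onclick="" attributes:
// <button id="playerPause">Pause</button>
// <script src="popup.js"></script>

// popup.js attaches the handler from the external file instead of an inline one
document.getElementById('playerPause').addEventListener('click', function () {
    // call into the background page, as in the original snippet
    chrome.extension.getBackgroundPage().playerPause();
});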
But the issue is that you can't execute calls to the SWF in the background page or expect it to return anything; all of these will continue to give you the "Uncaught TypeError" exception.
Take the case of a webcam image-capturing SWF: the webcam stream will show on the page, but the function call to it can never be made, and hence the image will never be captured.
My project to scan QR codes from the add-on's popup was ruined because of this.