I'm running a Selenium Grid with several Chrome instances. The grid consists of two Windows machines with several nodes. The tests are executed from another machine which connects to the grid. To use the remote-debugging features, I need to connect from the executing machine (which can read the session's host and the driver's debug URL) to the other machines and finally to the Chrome instances.
But Chrome rejects anything other than localhost.
I can only find solutions where people use tunneling or port forwarding, which is perhaps fine when there is only a single instance. In a grid I don't have static ports or static rules to set up static forwarding.
In my scenario the grid is built up automatically and is not an always-running system.
Does anybody have a hint how to solve this?
Since I found a solution myself, I want to share it. I will only post parts of the code to give hints rather than the full code, since that would be too much here, but for an experienced developer this should be enough.
To be able to address the right browser and access its remote-debug WebSocket, I implemented a custom servlet for my nodes.
First the servlet:
public class DebugServlet extends RegistryBasedServlet
being registered through the node.json like
"servlets" :["com.....ui.util.DebugServlet"],
To access the node (on the right machine), I ask the Selenium hub about the session:
"http://" + hubHost + ":" + hubPort + "/grid/api/testsession?session=" + sessionId
where the "sessionid" can be retrieved from chromedriver.
From the returned json we can extract the node info of the session, here we need the url.
url = JSONUtil.get(response.getBody(), "proxyId")
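As an illustrative, stdlib-only sketch of these two steps (JSONUtil is a project-specific helper that is not shown in the original; the regex stand-in below is an assumption, not the real implementation):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GridSessionLookup {

    // Build the hub API URL that reports which node hosts a given session.
    static String testSessionUrl(String hubHost, int hubPort, String sessionId) {
        return "http://" + hubHost + ":" + hubPort
                + "/grid/api/testsession?session=" + sessionId;
    }

    // Naive stand-in for JSONUtil.get(body, "proxyId"): pull the node URL
    // out of the hub's JSON response with a regex.
    static String extractProxyId(String json) {
        Matcher m = Pattern.compile("\"proxyId\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }
}
```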
Now we can call the servlet on the correct host, passing in the WebSocket URL of the browser and whatever other data is needed. In my example, a default network header for basic auth is added:
url+ "/extra/DebugServlet"
with the headers in Java (they could also be parameters or any other mechanism HTTP provides):
new BasicHeader("BrowserUrl", webSocketDebuggerUrl), new BasicHeader("Name", name),
new BasicHeader("Value", value)
In the servlet we now extract that data, open a WebSocket to the browser with the given URL, and make our calls:
public static final String networkDebugging = "{\"id\": 1,\"method\": \"Network.enable\",\"params\": {\"maxTotalBufferSize\": 10000000,\"maxResourceBufferSize\": 5000000 }}";
public static final String addHeader = "{\"id\": 2,\"method\": \"Network.setExtraHTTPHeaders\",\"params\": { \"headers\": {\"${key}\": \"${value}\"}}}";
ws.connect();
ws.setAutoFlush(true);
ws.sendText(networkDebugging);
String payload = TemplateUtil.replace(addHeader, name, value);
ws.sendText(payload);
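TemplateUtil.replace is not shown in the original; a minimal sketch of such a placeholder-replacement helper (its name and behaviour here are assumptions) could be:

```java
public class TemplateUtil {

    // Substitute the ${key} and ${value} placeholders in a DevTools
    // command template with the actual header name and value.
    public static String replace(String template, String name, String value) {
        return template.replace("${key}", name).replace("${value}", value);
    }
}
```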
I have a problem opening a dialog using the PrimeFaces Dialog Framework. We are using an SSO solution to secure our application by integrating with an internal company SSO.
In short:
The real address (without SSO) of the application on our server is e.g. https://appserver1.net/ctx/page.xhtml (where ctx is the root context of our app).
In the normal case we get an SSO address such as https://ssoaddress.net/junction/page.xhtml,
where junction=ctx. During the request, the SSO address is rewritten to find the real address of our server and fetch the resources, and the response is rewritten back to the SSO URL. Everything works fine. But we have a second environment (DEV02) on which, due to some limitation, the SSO address has junction!=ctx, like https://ssoaddress.net/junction/ctx/page.xhtml. In that case, when I try to open a dialog, I get: "page.xhtml Not Found in ExternalContext as a Resource".
Working code when junction=ctx:
public void openTestPage() {
Map<String,Object> options = new HashMap<String, Object>();
options.put("resizable", false);
options.put("draggable", true);
options.put("modal", true);
options.put("height", 250);
options.put("contentHeight", "100%");
options.put("closable", true);
RequestContext.getCurrentInstance().openDialog("/pages/page", options, null);
}
Because the junction differs from the context, the requested page.xhtml cannot be found during rewriting. Maybe one of you knows how to solve this problem? I'll add that I cannot change the context root of the application.
Technical info: PrimeFaces 6.0, JSF 2.2, WebLogic 12.2.1.
Structure of resources: src/main/webapp/pages/page.xhtml
Since you can't fix the bad URL rewriting due to that limitation, you're left with fixing it with another rewrite.
You could put a separate proxy between your server and the SSO that does the rewriting.
Or you can rewrite right in your app: create your own rewriting servlet filter, or use a third-party solution, e.g. PrettyFaces.
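The core of such a rewriting filter, reduced to just the path manipulation (all names are illustrative, and it assumes the extra junction segment simply has to be stripped; a real servlet filter would call this and forward the request to the rewritten path):

```java
public class JunctionPathRewriter {

    // Strip the leading junction segment so "/junction/ctx/page.xhtml"
    // resolves to the real "/ctx/page.xhtml". Paths that do not start
    // with the junction prefix are left untouched.
    static String rewrite(String requestPath, String junction) {
        String prefix = "/" + junction + "/";
        return requestPath.startsWith(prefix)
                ? requestPath.substring(prefix.length() - 1)
                : requestPath;
    }
}
```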
We have been directly using U2F on our auth web app with the hostname as our app ID (https://auth.company.com) and that's working fine. However, we'd like to be able to authenticate with the auth server from other apps (and hostnames, e.g. https://customer.app.com) that communicate with the auth server via HTTP API.
I can generate the sign requests and what-not through API calls and return them to the client apps, but it fails server-side (auth server) because the app ID doesn't validate (clients are using their own hostnames as app ID). This is understandable, but how should I handle this? I've read about facets but I cannot get it to work at all.
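For reference, a trusted-facets list is a JSON document served from the app ID URL (over HTTPS) in roughly this shape, per the FIDO AppID and Facet specification; the hostnames below are just the ones from this question:

```json
{
  "trustedFacets": [{
    "version": { "major": 1, "minor": 0 },
    "ids": [
      "https://auth.company.com",
      "https://customer.app.com"
    ]
  }]
}
```

Per the specification, the server is expected to return this with the MIME type `application/fido.trusted-apps+json`.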
The client app JS is like:
var registerRequests = // ...
var signRequests = // ...
u2f.register('http://localhost:3000/facets', registerRequests, signRequests, function(registerResponse) {
if (registerResponse.errorCode) {
return alert("Registration error: " + registerResponse.errorCode);
}
// etc.
});
This gives me an error code 5 (timeout) after a while, and I don't see any request to /facets. Is there a way around this, or am I barking up the wrong tree (or a different forest)?
————
Okay, so after a few hours of researching this; I'm pretty sure this fiendish bit of the Firefox U2F plugin is the source of some of my woes:
if (u.scheme == "http")
if (url2str(u, true) == url2str(ou, true))
return resolve(challenge);
else
return reject("Not matching appID");
https://github.com/prefiks/u2f4moz/blob/master/ext/appIdValidator.js#L106-L110
It's essentially saying: if the appID's scheme is http, only allow it if it is exactly the same as the page's origin. The plugin only performs the trusted-facets JSON fetch for https appIDs.
Still not sure if I'm on the right track though in how I'm trying to design this.
I didn't need to worry about facets for my particular situation. In the end I just pass the client app hostname through to the auth server via the secure API interface, and it uses that as the App ID. It seems to work okay so far.
The issue I was having with facets was due to using HTTP in dev and the Firefox U2F plugin not permitting that with JSON facets.
I have a RESTful Web API that is running properly as I can test it with Fiddler. I see calls going through, I see responses coming back.
I am developing a tablet application that needs to use the Web API in order to fetch data or make updates in the repository.
My calls do not return and there is not a single trace in the Fiddler to show that my calls even reach the server.
The first call I need to make is to login. The URI would be this:
http://localhost:53060/api/user
This call would normally return some information about the user (such as group membership, level of authorization and so on). The Web API uses Windows Authentication, so the repository is able to resolve all these fields based on the credentials passed in. As I said, in Fiddler I see the three calls made to the URI as the authentication is negotiated between the caller and the server. The third call returns with a JSON object that contains all information generated from the repository as expected.
Now, moving to my client I have the following:
var webApiClient = new HttpClient(new HttpClientHandler()
{
    UseDefaultCredentials = true
})
{
    BaseAddress = new Uri("http://localhost:53060/")
};
webApiClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response = await webApiClient.GetAsync("api/user");
var userLoginInfo = await response.Content.ReadAsAsync<UserLoginInformation>();
My call to "GetAsync" never returns and, like I said, I see no trace of it in Fiddler.
Any idea of what I'm doing wrong?
Changing the URL where the Web API was exposed seemed to have fixed the problem. Thanks to @Nkosi for the suggestion.
For anyone stumbling onto this question and asking themselves how to change the URL of the Web API, there are two ways. If the simulator is running on the same machine with the Web API, the change has to be made in the "applicationhost.config" file for IIS Express. You can locate this file by right-clicking on the IIS Express icon in the Notification Area (the bottom right corner) and selecting show all websites. Highlight the desired Web API and it will show where the application host configuration file is located. In there, one needs to locate the following section:
<bindings>
<binding protocol="http" bindingInformation="*:53060:localhost" />
</bindings>
and replace the "localhost" name with the IP address of the machine where the Web API is running.
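For example (192.168.1.42 stands in for your machine's actual IP address):

```xml
<bindings>
  <binding protocol="http" bindingInformation="*:53060:192.168.1.42" />
</bindings>
```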
However, this approach will not work once you start testing your tablet app on a real device. IIS Express must be coerced into exposing the Web API to the outside world. I found an excellent Node.js package that can help with that, called IISExpress-proxy.
HttpClient 4.3.x issue.
There does not seem to be a way to attach a default host on CloseableHttpClient for 4.3.x.
This is frustrating, as it requires all of your request builders to know all the host info up front, rather than just building the request parts specific to the call and letting the client fill in any omitted defaults (e.g. a default host, port, etc.).
With 4.2.x and previous, you could set the default host on the client and any request just needs a subpath + parameters.
But with 4.3.x you have confusing layers of setRoutePlanner(x) (which can carry proxy settings) and setProxy(x) (which can be overridden by the route planner), and I'm confused about how these settle on the actual client instance. Debugging shows that the route planner does not get used for the default host, and 4.3.2 actually expects the deprecated ClientPNames.DEFAULT_HOST to be set (for the null-target-host case), which may be a defect.
I find Apache HttpClient to be going off the deep end with all these changes.
Also the examples do not fully clarify http client usage unfortunately.
As an aside: the new design is such mud; why not just have setDefaultHost(x) and clear up the confusion around the proxy layering?
Unless I'm missing something, how does one set the default host in http client 4.3.x?
Why do you think they changed and decided to make everything up front in the request objects vs. defaults in the client?
This is how one can provide a default target host using a custom route planner:
HttpRoutePlanner routePlanner = new DefaultRoutePlanner(DefaultSchemePortResolver.INSTANCE) {

    @Override
    public HttpRoute determineRoute(
            final HttpHost target,
            final HttpRequest request,
            final HttpContext context) throws HttpException {
        // Fall back to a default host whenever the request carries no target.
        return super.determineRoute(
                target != null ? target : new HttpHost("some.default.host", 80),
                request, context);
    }

};
CloseableHttpClient client = HttpClients.custom()
        .setRoutePlanner(routePlanner)
        .build();
I'm using Dropwizard, which I'm hosting, along with a website, on the google cloud (GCE). This means that there are 2 locations currently active:
Some.IP.Address - UI
Some.IP.Address:8080 - Dropwizard server
When the UI tries to call anything from my dropwizard server, I get cross-site origin errors, which is understandable. However, this is posing a problem for me. How do I fix this? It would be great if I could somehow spoof the addresses so that I don't have to fully qualify the resource in the UI.
What I'm looking to do is this:
$.get('/provider/upload/display_information')
Or, if I have to fully qualify
$.get('http://Some.IP.Address:8080/provider/upload/display_information')
I tried setting Origin Filters in Dropwizard per this google groups thread (https://groups.google.com/forum/#!topic/dropwizard-user/ybDOTOxjlLI), but it doesn't seem to work.
In index.html, served by the server at http://Some.IP.Address, you might have a jQuery call that looks as follows:
$.get('http://Some.IP.Address:8080/provider/upload/display_information', data, callback);
Of course your browser will not allow access to http://Some.IP.Address:8080 due to the Same-Origin Policy (SOP): the protocol (http, https), the host, and the port all have to be the same.
To achieve Cross-Origin Resource Sharing (CORS) on Dropwizard, you have to add a CrossOriginFilter to the servlet environment. This filter will add some Access-Control-Headers to every response the server is sending. In the run method of your Dropwizard application write:
import org.eclipse.jetty.servlets.CrossOriginFilter;

public class SomeApplication extends Application<SomeConfiguration> {

    @Override
    public void run(SomeConfiguration config, Environment environment) throws Exception {
        FilterRegistration.Dynamic filter = environment.servlets().addFilter("CORS", CrossOriginFilter.class);
        filter.addMappingForUrlPatterns(EnumSet.allOf(DispatcherType.class), true, "/*");
        filter.setInitParameter("allowedOrigins", "http://Some.IP.Address"); // allowed origins, comma separated
        filter.setInitParameter("allowedHeaders", "Content-Type,Authorization,X-Requested-With,Content-Length,Accept,Origin");
        filter.setInitParameter("allowedMethods", "GET,PUT,POST,DELETE,OPTIONS");
        filter.setInitParameter("preflightMaxAge", "5184000"); // 2 months
        filter.setInitParameter("allowCredentials", "true");
        // ...
    }

    // ...
}
This solution works for Dropwizard 0.7.0 and can be found on https://groups.google.com/d/msg/dropwizard-user/xl5dc_i8V24/gbspHyl4y5QJ.
See http://www.eclipse.org/jetty/documentation/current/cross-origin-filter.html for a detailed description of the CrossOriginFilter's initialisation parameters.