MobaXterm URL Protocol Handler Usage

I want to deploy a series of MobaXterm connections (SSH connections) to our users, and I would like to create a webpage where users can invoke a chosen session simply by clicking a link.
I can see that MobaXterm supports this through its URL protocol handler (installed by default), but I cannot find any documentation of the syntax the HTML links need in order to invoke the named sessions.
Can anyone help, or point me in the right direction, please?

I found that a link like this:
mobaxterm:ssh localhost
allowed me to open a session, but it also created a new 'saved session' each time and saturated my quota, so I am not sure it is the real solution.
But at least it looks like 'mobaxterm' is the protocol, if that helps anybody ending up here. This feature sure lacks documentation :)

I e-mailed Mobatek with the same question. It turns out if you right-click the "User Sessions" node and pick "Generate HTML web page", it will make a web page containing mobaxterm: protocol links to all your existing sessions. It looks like it's designed for exactly your use case, making a webpage to share with other users, so you may be able to just use that generated page as-is.
If you do want to generate your own links, it's a little trickier. I haven't really tried to understand the encoding yet; it's definitely not designed to be particularly user-readable, but since you can make any session you want and export it to a link, it shouldn't be too hard to reverse-engineer the fields you care about.
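If it helps, here is a minimal sketch of how you could assemble such a launcher page yourself from links you have already exported. The session name and the payload placeholder below are hypothetical; the mobaxterm: payload is the opaque encoded string you copy verbatim out of the generated page, since the encoding itself is undocumented.

// Minimal sketch: build a launcher page from exported MobaXterm links.
// The href payloads are opaque encoded session strings copied from the
// page MobaXterm generates via "Generate HTML web page" -- do not try
// to hand-write them.
const fs = require('fs');

const sessions = [
  // name and payload are placeholders, not a real session string
  { name: 'web01 (SSH)', href: 'mobaxterm:<paste exported payload here>' },
];

const page = `<!DOCTYPE html>
<html>
<body>
<h1>Team sessions</h1>
${sessions.map(s => `<a href="${s.href}">${s.name}</a><br>`).join('\n')}
</body>
</html>`;

fs.writeFileSync('sessions.html', page);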

Related

Can Chrome be used, from the command line, to retrieve a URL's content to a file?

I've been driving myself mad trying to get curl, wget, the Python requests module, and others to simply log me in to a website and pull page text there. I can certainly request HTML from the site, but only as an anonymous user. I've spent a few hours with tricks like Chrome's "Copy as cURL" feature, but the website in question is smart enough to defend against login replays.
All I want is a way, from the command-line, to do something like:
chrome.exe --output_to_file page.html https://www.endpoint.com/auth_access_only.html
Essentially, I'm looking for Chrome to do for me what cURL does, but I want the command-line invocation to be executed as me. I can see how this might open a potential security issue, but I don't mind at all if I have to do something magical to authorize my script. I'm not looking to do anything evil - I just want to be able to write scripts that are as "me" as I am.
I guess that, if it's truly unavoidable, I could suck it up and dust off Internet Explorer. I'd really rather not do that. I'd feel so dirty.
This is possible, but it's not as simple as you're thinking.
You can use the Chrome Debugging Protocol to remote-control Chrome.
You will need to write some code to make this work - I have done similar tasks using the chrome-remote-interface library for Node.js.
Make sure you understand what a browser profile is and where your profile folder lives.
If Chrome is already running using your browser profile: make sure it was launched with --remote-debugging-port=9002 or similar.
If Chrome is not already running using your browser profile: launch it with --user-data-dir="C:\path\to\your\profile" --remote-debugging-port=9002 or similar.
The "running or not" part is a bit tricky - you cannot launch more than one Chrome instance with the same browser profile, but you need to use this user profile because your login data is stored there. It may actually be easiest to create a separate browser profile that is just used for this automated task, and log in to the site there too.
Then, at a high level, your Node.js code will need to connect to Chrome, load the page, wait for the response, and save it to a file. Have a look at the example code for the chrome-remote-interface library - you can definitely piece together what you need from there.
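As a rough illustration, a minimal chrome-remote-interface script along those lines might look like this. The port matches the flag above, and the URL is the one from your question; everything else is a sketch, not a definitive implementation.

// Minimal sketch (npm install chrome-remote-interface). Assumes Chrome is
// already running with --remote-debugging-port=9002 under your profile.
const CDP = require('chrome-remote-interface');
const fs = require('fs');

(async () => {
  const client = await CDP({ port: 9002 });
  const { Page, Runtime } = client;
  try {
    await Page.enable();
    await Page.navigate({ url: 'https://www.endpoint.com/auth_access_only.html' });
    await Page.loadEventFired(); // wait for the page to finish loading
    // Pull the rendered HTML out of the live page and save it.
    const { result } = await Runtime.evaluate({
      expression: 'document.documentElement.outerHTML',
    });
    fs.writeFileSync('page.html', result.value);
  } finally {
    await client.close();
  }
})();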
Another option built on the same underlying technology is Puppeteer, another tool for automating Chrome. It is designed to start from a fresh profile every time, so you'll need to script more of the interaction (see the sketch after this list):
Visit the site's login page
Type the login credentials into the form and click the login button
Visit the site's authenticated page and save it to a file.
The benefit of this approach is that the result should be more reliable, preventing issues like expired login sessions.
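A minimal Puppeteer sketch of those three steps might look like this. The login URL, selectors, and environment variables are assumptions; inspect the real login form for the actual field names.

// Minimal Puppeteer sketch (npm install puppeteer). Selectors are hypothetical.
const puppeteer = require('puppeteer');
const fs = require('fs');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Steps 1 and 2: visit the login page and submit credentials.
  await page.goto('https://www.endpoint.com/login');
  await page.type('#username', process.env.SITE_USER);
  await page.type('#password', process.env.SITE_PASS);
  await Promise.all([
    page.waitForNavigation(), // wait for the post-login redirect
    page.click('#login-button'),
  ]);

  // Step 3: fetch the authenticated page and save its HTML.
  await page.goto('https://www.endpoint.com/auth_access_only.html');
  fs.writeFileSync('page.html', await page.content());

  await browser.close();
})();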

Using OneNote API without registering an application?

The question is pretty clear I think, but I will elaborate on why I'm asking it.
I created a little blog engine based on OneNote. Basically, the blog configuration asks for access to OneNote. Then the user chooses a section under which the blog posts are stored.
A cron script uses all this information to automatically get new pages, fetch the media, cache everything, and finally display the posts.
I chose OneNote because I own three Windows 8 computers and a Windows Phone, so OneNote was an easy choice, as I didn't want yet another application to manage my blog.
There is still a lot to do (as always with software...), but I want to make this more or less an open source project, so that other people can install it on their websites and link it directly to OneNote.
The only "big" obstacle for this right now is that authentication against the OneNote API requires registering the application with Live Connect and specifying a redirect domain. So every user wishing to use this blog engine on their server would have to register their own application... That looks complicated just for a blog, especially if you're not tech-savvy.
Is there a way to "skip" or work around this requirement, even if it requires the user to make the section public (as this is for a blog, that doesn't seem too much to ask)?
Thank you in advance,
Cheers
Sounds like an awesome project! When you get it released be sure to let us know at #OneNoteDev.
Unfortunately, at this time there's no way to circumvent the requirement for Live Connect OAuth configuration. You could offer a hosted variant so only you need to worry about the LiveID configuration.

Getting the same information Firebug can get?

This all goes back to some of my original questions about trying to "index" a webpage. I was originally trying to do it specifically in Java, but now I'm opening it up to any language.
Previously I tried using HtmlUnit and other methods in Java to get the information I needed, but wasn't successful.
The information I need to get from the webpage I can very easily find with Firebug, and I was wondering if there is any way to duplicate what Firebug does, specifically for my needs. When I open Firebug I go to the Net tab, then to the XHR tab, and it shows a constantly updating list of requests along with the information the server is sending back. When I click on a request and look at the response, it has the information I need, and all of this happens without the webpage ever refreshing, which is what I am trying to reproduce (not to mention that the variables it outputs do not show up in the HTML of the webpage).
So can anyone point me in the right direction on how they would go about this?
(I will be putting this information into a MySQL database, which is why I added it as a tag; I still don't know what language would be best to use, though.)
Edit: These requests to the server are somewhat random, and although Firebug shows the URL they come from, when I try to visit that URL in Firefox it comes up trying to open something called application/json.
Jon, I am fairly certain that you are confusing several technologies here, and the simple answer is that it doesn't work like that. Firebug works specifically because it runs as part of the browser, and (as far as I am aware) runs under a more permissive set of instructions than a JavaScript script embedded in a page.
JavaScript is, for the record, different from Java.
If you are trying to log AJAX calls, your best bet is for the server-side application to log the invoking IP, user agent, cookies, and complete URI to your database on receipt. That will be far better than any client-side solution.
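A minimal sketch of that server-side logging, assuming a Node.js/Express stack (the original suggestion names no particular stack), with console.log standing in for the MySQL insert:

// Minimal Express sketch: log every incoming request on receipt.
// Swap console.log for an INSERT into your MySQL table.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  console.log({
    ip: req.ip,
    userAgent: req.get('User-Agent'),
    cookies: req.get('Cookie'),
    uri: req.originalUrl,
  });
  next();
});

app.listen(3000);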
On a note more related to your question, it is not good practice to assume that everyone has read other questions you have posted. Generally speaking, "we" have not. "We" is in quotes because, well, you know. :) It also wouldn't hurt for you to go back and accept a few answers to questions you've asked.
So, the problem is:
With someone else's web page, hosted on someone else's server, you want to extract select information?
Using cURL, Python, Java, etc. is too painful because the data is continually updated via AJAX (and so requires a JS interpreter)?
Plain jQuery or iframe intercepts will not work because of cross-site (same-origin) security restrictions.
Ditto a bookmarklet -- which has the added disadvantage of needing to be manually triggered every time.
If that's all correct, then there are 3 other approaches:
Develop a browser plugin... More difficult, but has the power to do everything in one package.
Develop a userscript. This is much easier to do and technologies such as Greasemonkey deal with the XSS problem.
Use a browser macro technology such as Chickenfoot. All of these have pluses and minuses, which I won't get into.
Using Greasemonkey:
Depending on the site, this can be quite easy. The big drawback, if you want to record data, is that you need your own web server and web application. But this server can be hosted locally on a XAMPP stack, or whatever web-application technology you're comfortable with.
Sample code that intercepts a page's AJAX data is at: Using Greasemonkey and jQuery to intercept JSON/AJAX data from a page, and process it.
Note that if the target page does NOT use jQuery, the library in use (if any) usually has similar intercept capabilities. Or, listening for DOMSubtreeModified always works, too.
If you're using a library such as jQuery, you may have an option such as the jQuery ajaxSend and ajaxComplete callbacks. These could post requests to your server to log these events (being careful not to end up in an infinite loop).
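For instance, a minimal userscript-style sketch of that idea, where the logging endpoint (http://localhost/log) is hypothetical; point it at your own web application:

// Runs in the page (e.g. via Greasemonkey) where jQuery is already loaded.
$(document).ajaxComplete(function (event, xhr, settings) {
  // Skip our own logging calls, or we'd loop forever.
  if (settings.url.indexOf('/log') !== -1) return;

  // Post the intercepted URL and response to the (hypothetical) local logger.
  $.post('http://localhost/log', {
    url: settings.url,
    response: xhr.responseText,
  });
});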

Novell Identity Manager (NIM) 3.6: How to make an image part of the resources for HTML reports?

This question might not belong here, but since it is related to HTML and reporting, I thought I could ask nonetheless.
I'm currently working to list all of the user identities in Designer for Identity Manager, by Novell.
I used the User Application connector and built the required queries. I also created a Provisioning and Request Definition with the necessary fields.
I can populate an HTML table with the results of the selected query and get the user information from the Vault. As an aside, I would like the organization's logo in the top-left-hand corner, the project name at the top right, and the report title just below it on the right-hand side.
My concern is to set the organization's logo on the report.
Question
How can I define this image logo as an image resource into NIM so that I can use it in my HTML identity report?
Would it suffice to include the path to the image in the HTML document? If so, how should I set this image path? I don't know much about HTML.
EDIT #1
I know that NIM uses Tomcat as its web server to handle responses. Perhaps moving the image into one of its directories would suffice?
If so, how can I find out the path where I should put the image so that it is considered part of the website's resources?
Father Ramon, the patron saint of IDM, responded in this thread in the Support Forums.
It is the path where you need to leave the file that is incredibly non-obvious. He says the path is:
/opt/novell/eDirectory/lib/dirxml/rules/manualtask/mt_files or
/usr/lib/dirxml/rules/manualtask/mt_files
(depending on whether you are using eDir 8.7.x or 8.8.x). IDM will automatically find the file there and attach it when referenced.
But I see in another thread that some people had issues getting this to work.
You may find the Support Forums (I linked the IDM forum specifically) helpful. A lot of people monitor those on this topic.
Also, you may find Novell Cool Solutions helpful for user-contributed content. (I write as geoffc there.)
In the IDM space, I have written a large number of articles that may help you in general; they are listed on my Wiki Page.
If all that fails, the User Application workflow engine uses a different email engine that may be more flexible. (You could call a Start Workflow for a workflow with just a single Email action in it.)
Additionally, IDM 4 includes a library, lib-AJC, full of ECMA functions (which would work perfectly well in IDM 3.6 and 3.5), including a set of emailing functions. If you have a driver configuration from IDM 3.6.1, odds are good you already have lib-AJC in your tree.

In order to bypass a website's login screen, can you load a link with a username and password?

I am relatively new to web development, and I was hoping I could get some pointers about the feasibility of a feature I would like to implement. Is it possible to have a URL link that you can click on that contains login credentials for the website it links to, so as to bypass that website's login screen?
In other words, can I make a link from my website to Facebook that would log me right in to my Facebook account from any computer? Meaning, if I don't have cookies to store my login info in, is it still possible to log in?
This is just a conceptual question, so any help would be appreciated! Thanks!
One reason why this is generally avoided is that web servers often store the query string parameters in the access logs. And normally, you wouldn't want files on your server containing a long list of usernames and passwords in clear text.
In addition, a query string containing a username and password could be used in a dictionary attack to guess valid login credentials.
Apart from those issues, as long as the request is made via HTTPS, it would at least be safe in transit.
It is possible to pass parameters in the URL through a GET request to the server, but one has to understand that the request would likely be made in clear text and thus isn't likely to be secure. There was a time when I did have to program a "silent" log-in using tokens, so it can be done in enterprise applications.
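For illustration only, a "silent" log-in of that sort usually means the link carries a short-lived signed token instead of the raw password. Here is a minimal sketch; the routes, names, and signing scheme are all assumptions, not anything from an actual enterprise product:

// Hypothetical Express sketch of a token-based "silent" log-in link.
const crypto = require('crypto');
const express = require('express');
const app = express();
const SECRET = process.env.LOGIN_SECRET; // shared signing secret

function sign(user, exp) {
  return crypto.createHmac('sha256', SECRET).update(user + '.' + exp).digest('hex');
}

// Issue a link like /autologin?user=alice&exp=...&sig=... (valid 5 minutes).
function makeLoginLink(user) {
  const exp = Date.now() + 5 * 60 * 1000;
  return `/autologin?user=${encodeURIComponent(user)}&exp=${exp}&sig=${sign(user, exp)}`;
}

app.get('/autologin', (req, res) => {
  const { user, exp, sig } = req.query;
  // Accept only unexpired links whose signature checks out.
  if (Number(exp) > Date.now() && sig === sign(user, exp)) {
    res.cookie('session', user).redirect('/'); // establish the session, skip the login screen
  } else {
    res.status(403).send('Link expired or invalid');
  }
});

console.log(makeLoginLink('alice'));
app.listen(3000);

The point is that the URL never contains the password itself, and a leaked link goes stale quickly.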
You used to be able to do this, but most browsers don't allow it anymore. You would never be able to do this with Facebook, only with something that uses browser auth (where the browser pops up a username/password dialog).
It was like this:
http://username:pass@myprotectedresource.com
What you might be able to do is whip up some JavaScript in a link that posts your username and password to the login page of Facebook. I'm not sure it will work, because you might need to scrape the cookie/hidden fields from the login page itself.
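A sketch of that idea, with hypothetical field names and action URL (a real login form, Facebook's included, also carries hidden tokens that this ignores):

// Bookmarklet-style sketch: build and submit a login form programmatically.
// Field names ('email', 'pass') and the action URL are assumptions.
const form = document.createElement('form');
form.method = 'POST';
form.action = 'https://www.example.com/login';

for (const [name, value] of Object.entries({ email: 'me@example.com', pass: 'secret' })) {
  const input = document.createElement('input');
  input.type = 'hidden';
  input.name = name;
  input.value = value;
  form.appendChild(input);
}

document.body.appendChild(form);
form.submit();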
It is possible for the site to block you on account of missing cookies, an invalid nonce, or the wrong HTTP referrer, but it may work if their security is lax.
While it is possible, it is up to the site (in this case Facebook) to accept these values in the query string. There are certainly security issues to consider, and it isn't generally done.
There are, though, various options out there for single sign-on. This web site uses OpenID for that.