How to run XOWA in a web browser (offline) - mediawiki

I would like to set up XOWA (offline Wikipedia) so it runs in a browser, but I don't know how to do so. I've googled but can't seem to find a way (or maybe I missed it). I was able to import everything locally and it works, but is there any way to set it up and run it in a browser?

I am able to run XOWA in the browser now by running
java -jar /xowa/xowa_linux.jar --app_mode http_server --http_server_port 8080
and then opening it in the browser. However, it seems to allow only one user at a time and does not let multiple users use it at the same time.

I managed to do it with Kiwix instead of XOWA, and it works without issue even when multiple users use it at the same time; you just have to download the dictionary based on your needs and update it whenever there are new releases.

Related

localStorage same index name in another app

I'm developing an app with Ionic/Cordova and have used localStorage many times.
For example, I have something like this:
window.localStorage['is_user_paid'] = 1;
So, if the user pays, I set this localStorage item.
Now, if another app sets this to 1 and runs on the device, will my app assume the user has really paid? Is it necessary to use an app-specific key like 2afjx8y_is_user_paid? Any ideas?
When running under Cordova, localStorage is sandboxed to your app; no other apps can see the content within your app's localStorage, nor can they change the contents. Likewise, your app can only see its own localStorage contents.
Sandboxed does not mean unreadable or uneditable by the user, however, which is why it is vitally important not to store things like passwords in localStorage: the file itself is mostly human-readable and easily accessible by your end user. Apps, though, are prevented from accessing any localStorage other than their own.
(Note: there are ways around this when apps from the same company need to share data, but they involve a different storage mechanism.)
I got around this once upon a time by creating a unique identifier within my app (stored not in the code but in the datastore), and I would use it whenever accessing local storage.
The code is predictable enough; the logic is what's important:
Create a value in your datastore (or in a file that isn't readable via a URL; on GAE that might be an app.yaml file, or whatever) that you use as a unique ID. You can do this by hand if you have to. Generate a GUID of some kind and just store it. Don't put it in your dev code or hardcode it into a JS page; make sure it's off to the side (unless you don't care, but you probably should).
Whenever you access local storage, either to get or put, run it through a function that retrieves that info (or already retrieved it as part of bootstrapping the app, whatever works for your context), and just prepend it to whatever you're calling your key.
That way you can continue coding as if you're just using an easy to understand key, like 'user_name', but the stored/retrieved key will look like "abd12342Baa345324w3423sdfs323DD_user_name".
From time to time, if so inclined, you can change that key, set up your code such that if it retrieves 'user_name' with the old key, you swap it out for the new one and continue your ops as usual.
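A minimal sketch of that wrapper in plain JavaScript (the prefix value and helper names here are made up for illustration; in real code the prefix comes from your datastore at bootstrap rather than being hardcoded):

// APP_KEY is the GUID-like value retrieved from your datastore when the app
// boots; the literal below is only for the sketch, never ship it in code.
var APP_KEY = 'abd12342Baa345324w3423sdfs323DD';

// All reads and writes go through these helpers, so callers keep using
// friendly names like 'user_name' while the stored key carries the prefix.
function setItem(key, value) {
  window.localStorage[APP_KEY + '_' + key] = value;
}

function getItem(key) {
  return window.localStorage[APP_KEY + '_' + key];
}

// Usage: callers never see the prefix.
setItem('user_name', 'alice');
console.log(getItem('user_name')); // "alice"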
I did this at work for an app in production, and all around it was considered a legit way to go about it. I got the approach from a GAE article that shows how to store and retrieve client tokens for Google Login without putting them in your code; you can even store different versions of that UID for dev/qa/prod and whatever else. It's not specific to GAE; the concept should pan out to any environment.
Of course, if another developer on that project decides to use that same function and same GUID, then the problem just moves. A little discipline can clean that up, though: I put a comment above that util function and we never had a problem.

How can I share my current Chrome profile with Selenium?

I'd like to use Selenium alongside my current Chrome profile, which may or may not be in use. I'd like to be able to launch some Selenium automation that is aware of (for example) any currently set cookies from my current Chrome session. I'd also like my Selenium automation to be able to change cookies that will still persist in my local profile.
Example:
I'd like to be able to manually log into a website (without Selenium)
I'd like to then launch some Selenium automation that assumes I'm already logged in (which I would be)
I'd like to then make some type of change through the Selenium automation
I'd like to close out the Selenium automation and see the changes that were made reflected in my original, manually-initiated, session
I know this can technically be achieved by setting user-data-dir in ChromeOptions, however that results in the following errors:
[20644:39092:1124/205239:ERROR:cache_util_win.cc(20)] Unable to move the cache: 0
[20644:39092:1124/205239:ERROR:cache_util.cc(134)] Unable to move cache folder C:\Users\****\AppData\Local\Google\Chrome\User Data\Default\ShaderCache\GPUCache to C:\Users\****\AppData\Local\Google\Chrome\User Data\Default\ShaderCache\old_GPUCache_000
[20644:39092:1124/205239:ERROR:cache_creator.cc(134)] Unable to create cache
[20644:39092:1124/205239:ERROR:shader_disk_cache.cc(585)] Shader Cache Creation failed: -2
Have you reviewed the permissions of the directories listed in the error message? They should share the same group and have write permissions.
This should resolve those error messages; otherwise, your implementation looks correct.
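For reference, the user-data-dir approach from the question looks roughly like this. This is only a minimal sketch using the Node.js selenium-webdriver bindings (the profile path is a placeholder for your own Chrome "User Data" directory), and note that Chrome generally can't have the same profile open in two instances at once, which is a common cause of the cache errors above:

// Minimal sketch: launch Chrome against an existing profile directory.
const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

async function run() {
  const options = new chrome.Options()
    // Placeholder path: point this at your own profile directory, and make
    // sure no other Chrome instance currently has it open.
    .addArguments('user-data-dir=C:\\Users\\me\\AppData\\Local\\Google\\Chrome\\User Data')
    .addArguments('profile-directory=Default');

  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();

  try {
    // Cookies and logins already stored in the profile apply here.
    await driver.get('https://example.com');
  } finally {
    await driver.quit();
  }
}

run();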

Automatically copy text from a web page

There is a VPN provider that keeps changing their password. I have autologin set up, but obviously the VPN connection drops every time they change the password, and I have to manually copy and paste the new password into the credentials file.
http://www.vpnbook.com/freevpn
This is annoying. I realise that the vpn probably wants people not to be able to do this, but it's not against the ToS and not illegal, so work with me here!
I need a way to automatically generate a file which has nothing in it except
username
password
on separate lines, just like the one above. Downloading the entire page as a text file automatically (I can do that) will therefore not work. OpenVPN will not understand the credentials file unless it is purely and simply
username
password
and nothing more.
So, any ideas?
Ideally this kind of thing is done via an API that vpnbook provides. A script can then access the information much more easily and store it in a text file.
Barring that (and it looks like vpnbook doesn't have an API), you'll have to use a technique called web scraping.
To automate this via "Web Scraping", you'll need to write a script that does the following:
First, log in to vpnbook.com with your credentials
Then navigate to the page that has the credentials
Then traverse the structure of the page (called the DOM) to find the info you want
Finally, save out this info to a text file.
I typically do web scraping with Ruby and the mechanize library. The first example on the Mechanize examples page shows how to visit the Google homepage, perform a search for "Hello World", and then grab the links in the results one at a time, printing each out. This is similar to what you are trying to do, except that instead of printing you would write to a text file (Google for how to write a text file with Ruby):
require 'rubygems'
require 'mechanize'

a = Mechanize.new { |agent|
  agent.user_agent_alias = 'Mac Safari'
}

a.get('http://google.com/') do |page|
  search_result = page.form_with(:id => 'gbqf') do |search|
    search.q = 'Hello world'
  end.submit

  search_result.links.each do |link|
    puts link.text
  end
end
To run this on your computer you would need to:
a. Install Ruby
b. Save this in a file called scrape.rb
c. Run it from the command line with "ruby scrape.rb"
OSX comes with an older ruby that would work for this. Check out the ruby site for instructions on how to install it or get it working for your OS.
Before using a gem like mechanize you need to install it:
gem install mechanize
(this depends on Rubygems being installed, which I think typically comes with Ruby).
If you're new to programming this might sound like a big project, but you'll have an amazing tool in your toolbox for the future, where you'll feel like you can pretty much "do anything" you need to, and not rely on other developers to have happened to have built the software you need.
Note: for sites that rely on JavaScript, mechanize won't work; you can use Capybara + PhantomJS to run an actual browser that can execute JavaScript from Ruby.
Note 2: It's possible that you don't actually have to go through the motions of (1) going to the login page, (2) filling in your info, and (3) clicking on "Login", etc. Depending on how their authentication works, you may be able to go directly to the page that displays the info you need and just provide your credentials directly to that page, using either basic auth or other means. You'll have to look at how their auth system works and do some trial and error. The most straightforward, most-likely-to-work approach is to just do what a real user would do: log in through the login page.
Update
After writing all this, I came across the vpnbook-utils library (during a search for "vpnbook api") which I think does what you need:
...With this little tool you can generate OpenVPN config files for the free VPN provider vpnbook.com...
...it also extracts the ever changing credentials from the vpnbook.com website...
It looks like, with one command:
vpnbook config
you can automatically grab the credentials and write them into a config file.
Good luck! I still recommend you learn ruby :)
You don't even need to parse the content. Just string search for the second occurrence of Username:, cut everything before that, use sed to find the content between the next two occurrences of <strong> and </strong>. You can use curl or wget -qO- to get the website's content.
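The same idea sketched in Node.js rather than curl/sed (it assumes the page markup is as described above, with the credentials in the first two <strong> tags after the second occurrence of "Username:"; adjust if the page changes):

// Fetch the page, find the second "Username:", then pull the next two
// <strong>...</strong> values and write them out as "username\npassword".
const http = require('http');
const fs = require('fs');

http.get('http://www.vpnbook.com/freevpn', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const second = body.indexOf('Username:', body.indexOf('Username:') + 1);
    const rest = body.slice(second);
    const values = [...rest.matchAll(/<strong>(.*?)<\/strong>/g)]
      .slice(0, 2)
      .map((m) => m[1].trim());
    fs.writeFileSync('credentials.txt', values.join('\n') + '\n');
  });
});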

Can Google Chrome be made to auto reload after network outage in kiosk scenario?

I have an unattended touch screen kiosk application which needs to be able to automatically reload the browser home page after a network outage has occurred. At the moment the browser will display an "Unable to connect to the internet" error and will wait for a manual reload to be carried out before proceeding. Can this be automated?
I've searched and found some plugins that deal with auto-reload, but they don't seem to work in this context. I am guessing a plugin is only active once a page has loaded, so in this error condition the plugin may not be active.
One alternative might be to override the error page which is displayed by Chrome but I don't know if this is possible. I could then instantiate a Javascript timer to try a reload every n seconds for example. Is this possible?
I saw a suggestion to use frames to allow the outer frame (which is never refreshed) to keep trying the loading of an inner frame but I'm not keen to use frames unless there is no alternative. I also saw a suggestion to use AJAX calls to check if the network was working before attempting a page load but this seems overkill if there is a way to correct the error only when it has occurred rather than pre-empt an error for every page load.
Host system is Windows 7 by the way. I'm keen to keep the browser running if possible rather than kill and create a new browser process.
If you don't want to tackle chrome extension development, you could wrap your site in an iframe, and then periodically refresh the iframe from the parent frame. That way you don't need to worry about OS issues.
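A minimal sketch of the parent page's script (the kiosk URL is a placeholder and the 30-second interval is arbitrary):

// Create the iframe once; the outer page itself never reloads.
var HOME = 'https://kiosk.example.com/';
var frame = document.createElement('iframe');
frame.style.cssText = 'border:0;width:100vw;height:100vh';
frame.src = HOME;
document.body.appendChild(frame);

// Periodically re-point the inner frame at the home page, so after a network
// outage the next tick simply retries the load instead of leaving an error
// page on screen.
setInterval(function () { frame.src = HOME; }, 30000);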
If the content were loaded via AJAX from the start, the page could simply show a custom message while it retries its connectivity check via AJAX. Prevention over remedy is probably the better approach.
Assuming Linux, you could create an ifup script to simply relaunch the browser with something like
#!/bin/sh
killall google-chrome
DISPLAY=:0 google-chrome
On Debian/Ubuntu, edit /etc/network/interfaces to include a post-up line; Google "ifupdown" for other distros.
On Windows, you'd do roughly the same with a PowerShell script.
If you really want the precise behaviour you describe (without restarting the whole browser), I suggest you develop a plugin/extension: http://code.google.com/chrome/extensions/getstarted.html
I know you are using Chrome, but in Firefox this is trivial by overriding the netError.xhtml page to do a setTimeout(location.reload, 10000);.

How to configure NetBeans code entry points when you use mod_rewrite

I am developing a website in PHP and I am using mod_rewrite rules. I want to use the NetBeans Run Configuration (under project properties) to set code entry points that look like http://project/news or http://project/user/12
It seems NetBeans has a problem with this and needs an entry point that maps to a physical file, like http://project/user.php?id=12
Has anyone found a good way to work around this?
I see your question is a bit old, but since it has no answer, I will give you one.
What I did to solve the problem was to give NetBeans what it wants (a valid physical file), but provide my controller (index.php in this case) with the data it needs to act correctly. I pass this data using a query parameter. Using your example, with project as the site domain and user/12 as the URL, use the following in the NetBeans Run Configuration and Arguments boxes. NetBeans does not need the ?, as it inserts that automatically; see the complete URL below the input boxes.
Project URL: http://project
Index File: index.php (put your controller name here)
Arguments: url=user/12
http://project/index.php?url=user/12
Then in your controller (index.php in this example), test for the url query param and, if it exists, parse it instead of the actual server request URI, as you would do normally.
I also do not want the above URL to be publicly accessible. So, by using an IS_DEVELOPER define, which is true only for configured developer IP addresses, I can control who has access to that special URL.
If you are trying to debug specific pages, alternatively, you can set the NetBeans run configuration to:
http://project/
and debug your project. You must run through your home page once, but since the debugger is then active, you can just navigate to http://project/user/12 in your browser and NetBeans will debug at that entry point. I found passing through my home page every time a pain, so I use the technique above.
Hopefully that provides enough insight to work with your project. It has worked well for me, and if you need more detail, just ask.
EDIT: Also, you can make the Run Configuration Project URL the complete URL http://project/user/12 and leave the Index File and Arguments blank; that works too, without any special code in the controller (tested in NetBeans 7.1). I think I will start using this method.