I'm using PrimeFaces and PrimeFaces Extensions in my application. For every resource, such as .css and .js files, there are also "ln" and "v" query parameters in the GET request for that resource, like below:
primefaces-extensions.js?ln=primefaces-extension&v=6.1
validation.js?ln=primefaces&v=6.1
As a security concern, since these parameters reveal the exact version of the framework I'm using, how can I hide them?
Hiding the 'ln' is kind of useless, since with very little effort you can get the same information from the JavaScript files and from the source of the page too (PF() is all over the place).
The 'v' however is a slightly different issue. If you use the unmodified PrimeFaces source, hiding it is sort of useless too: with very little effort, attackers can download your sources, hash them, and compare the resulting hashes against a dictionary of hashes of existing PrimeFaces releases, which they can easily build, and so learn which version you use. So the only thing to do here is to modify the source so that it does not produce 'known or comparable' hashes, by making some slight modifications (adding whitespace should already help).
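To make the fingerprinting concrete, here is a minimal sketch (Node.js/TypeScript; the dictionary entry is made up) of the kind of lookup an attacker could automate:

import { createHash } from 'crypto';
import { readFileSync } from 'fs';

// Hypothetical dictionary mapping known hashes of validation.js to releases.
const knownHashes: Record<string, string> = {
  'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855':
    'PrimeFaces X.Y (made-up entry)',
};

// Hash the resource exactly as it is served and look it up.
const digest = createHash('sha256')
  .update(readFileSync('validation.js'))
  .digest('hex');

console.log(knownHashes[digest] ?? 'unknown version');

Any whitespace change to the served file breaks this lookup, which is why the slight modifications mentioned above already help.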
But if you really want the version not to be shown, you can download the PrimeFaces sources, replace the version info with some obfuscated number and build that custom version. Keep in mind that if you don't make any changes in the sources, the dictionary lookups mentioned above still work. So it is only a minor inconvenience for hackers.
Note: this is not a dupe of this or this other question. Read on: this question is specific to the Code-Sharing template.
I am doing some pretty basic experiments with NativeScript, Angular and the code-sharing templates (see: @nativescript/schematics).
Now I am doing some exploration / PoC work on how different "build configurations" are supported by the framework. To be clear, I am searching for a simple (and hopefully official) way to have the application use a different version of a specific file (let's call it configuration.ts) based on the current platform (web/iOS/Android) and environment (development/production/staging?).
Doing the first part is obviously trivial; after all, that is the prime purpose of the code-sharing schematics. Different versions of the same file are identified by different extensions. This page explains things pretty simply.
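For example, the platform split looks roughly like this (file contents are made up; the extension convention is the one from the code-sharing docs):

// config.ts — used by the web build (ng serve / ng build)
export const config = { apiUrl: 'https://example.invalid/api' };

// config.tns.ts — picked up instead of config.ts by the mobile build
export const config = { apiUrl: 'https://example.invalid/mobile-api' };

// consumers never mention the platform extension:
import { config } from './config';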
What I can't figure out as easily is whether the framework/template supports any similar convention-based rule that can be used to switch between debug/release (or, even better, development/staging/production) versions of a file. Think for example of a config.ts file that contains different parameters based on the environment.
I have done some research on the topic, but I was unable to find a conclusive answer:
the old and now retired documentation for the AppBuilder platform mentions a (.debug. and .release.) naming convention for files. I don't think this works anymore.
other sources mention passing parameters in the call to tns build / tns run and then fetching them via the webpack env variables (see here, and the sketch after this list). This may work, but seems oddly convoluted.
a third option that gets mentioned is to use hooks to customize the build (or use a plugin that should do the same).
lastly, for some odd reason, @nativescript/schematics seems to generate a default project that contains two files called environment.ts and environment.prod.ts. I suspect those only work for the web version of the project (read: ng serve); I wasn't able to get the mobile compiler to recognize files that end with debug.ts, prod.ts or release.ts.
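To illustrate the env-variable route from the second bullet, here is a sketch of how a value passed to tns could surface in the bundle (the flag name config and the constant __CONFIG_NAME__ are my own made-up names; DefinePlugin is standard webpack):

// invoked as e.g.: tns run android --bundle --env.config=staging
const webpack = require('webpack');

module.exports = (env: { config?: string } = {}) => {
  const configName = env.config || 'development';
  return {
    // ...the rest of the generated NativeScript webpack config...
    plugins: [
      // Bake the chosen name into the bundle as a compile-time constant.
      new webpack.DefinePlugin({
        __CONFIG_NAME__: JSON.stringify(configName),
      }),
    ],
  };
};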
While it may be that what I am trying to do just isn't supported (yet?), the general confusion and dissenting opinions on the matter make me think I may be missing something, somewhere.
In case this IS somehow supported, I also wonder how it may integrate with the NativeScript Sidekick app that is often suggested as a tool to ease the build/run process of NativeScript applications (there is no way to specify additional parameters for the tns commands that Sidekick automates; the only option available is switching between debug and release mode), but this is probably better left for another question.
Environment files are not yet supported; passing environment variables from the build command could be a viable solution for now.
But of course, you may write your own schematics if you'd like immediate support for environment files.
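If you do go that route, one possible shape for the file swap, mirroring the Angular CLI's fileReplacements behaviour, is a sketch like this (the file layout is an assumption; NormalModuleReplacementPlugin is standard webpack):

const webpack = require('webpack');

module.exports = (env: { environment?: string } = {}) => {
  const name = env.environment || 'dev';
  return {
    plugins: [
      // Redirect every import of environment.ts to environment.<name>.ts.
      new webpack.NormalModuleReplacementPlugin(
        /environments[\\/]environment\.ts$/,
        `./environment.${name}.ts`
      ),
    ],
  };
};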
I did not look into sharing environment files between web and mobile yet; I do like Manoj's suggestion regarding modifying the schematics, but I'll have to cross that bridge when I get there, I guess. I might have an answer to your second question regarding Sidekick. The latest version does support a "Webpack" build option, which seems to pass the --bundle parameter to tns. The caveat is that this option seems to be more sensitive to TypeScript errors, even relatively benign ones, so you have to be careful and make sure to fix them all prior to building. In my case I had to lock the version of @types/jasmine in package.json to "2.8.6" in order to avoid an incompatibility between it and the version of TypeScript that Sidekick's cloud solution is using. Another hint is to check "Clean Build" after npm dependency changes are made. Good luck!
I'm processing a variety of RSS feeds, which contain summaries as well as the content of the target page URLs, and I am trying to use a uniform transformation method.
XSLT was the first thing that occurred to me to try, as it would accomplish what I want, in a standard way, without a lot of fuss aside from adding new XSLT stylesheets to accommodate uniquely formatted sites and feed content.
Problem: XSLT libraries are considered "private" in iOS, and even linking statically against your own copy will get you rejected by the App Store analysis tools.
I've looked into the possibility of injecting the stylesheet and data into a UIWebView that isn't displayed, but this seems like a really roundabout and hackish way to get at the system's underlying XSLT processor in an "approved" fashion.
What alternative techniques/libraries exist which would let me do this in a standard fashion, i.e. without rolling my own?
I'm not sure I fully understand your requirements, but one possibility would be to use libxml (which is allowed in iOS) to parse the XML and, if necessary, manipulate the DOM. If you really need to do XML transformations this is going to be more effort than XSLT, but if you just need to extract data from the XML, that can be done fairly easily with XPath queries.
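To illustrate the XPath-extraction idea (sketched in TypeScript with the xpath and @xmldom/xmldom npm packages purely to show the shape of the queries; on iOS the equivalent calls would go through libxml's C API):

import { DOMParser } from '@xmldom/xmldom';
import * as xpath from 'xpath';

const feed = `<rss><channel>
  <item><title>First post</title><link>https://example.invalid/1</link></item>
</channel></rss>`;

const doc = new DOMParser().parseFromString(feed, 'text/xml');

// Pull out just the fields we need instead of transforming the whole document.
const titles = xpath.select('//item/title/text()', doc) as Node[];
for (const t of titles) {
  console.log(t.nodeValue); // "First post"
}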
That said, I have read several people claiming they got XSLT working on iOS and had their apps approved in the app store. In particular, I've seen this stackoverflow answer claimed as a working solution by multiple people. And if that fails, another answer suggested building the libxslt library yourself with renamed symbols to bypass the app store checks. I would only suggest that as a last resort though.
You'll probably want to look into Hpple for something powerful but lightweight/native. See the getting-started tutorial here: http://www.raywenderlich.com/14172/how-to-parse-html-on-ios. Good luck!
I'm going to also recommend TFHpple, but I'm also going to elaborate on the solution. I've built an app that navigates a 3rd-party website/data source (well, I'm the 3rd party, they're the source, but that's semantics), and there are some pitfalls. The biggest pitfall is obvious: if the data source's DOM changes, you need to change your app and re-release. A creative way around this would be to publish/expose the expected DOM searches on a public server; that way the end user doesn't have to update their app any time the data source changes (as long as the change isn't radical).
For instance, if your expected DOM search in TFHpple is @"//figure[@class='figure']/a" and then a week from now the resource you're looking for in your data source is altered to @"//figure1[@class='figure1']/a", you just opened yourself up to an App Store release... UNLESS you publish the expected DOM searches in a data dictionary on a web server you control, which your app can consume and serve out to the various DOM search elements within your app. The only problem I foresee here is that if the data source adds or removes a data element you want to consume, you either have to release a build or handle the removal ahead of time (respectively).
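The published dictionary can be as simple as a keyed map of XPath strings that the app downloads at launch; a sketch (in TypeScript for brevity, the same idea applies in Objective-C; endpoint and key names are made up):

// Shape of the remotely hosted search dictionary.
interface DomSearchConfig {
  figureLinks: string; // e.g. "//figure[@class='figure']/a"
}

// Fetch the current queries instead of hard-coding them in the binary.
async function loadSearchConfig(): Promise<DomSearchConfig> {
  const res = await fetch('https://example.invalid/dom-search-config.json');
  return (await res.json()) as DomSearchConfig;
}

When the data source changes, you update the JSON on your server and existing installs keep working without a re-release.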
Lastly, if the data source's DOM isn't well formed or consistent, you may be beating your head against a wall more often than not.
I have lots of scanned images of a magazine (published monthly) and I have to organize them in a searchable manner.
Users should be able to view the magazine issue-wise or search for predefined categories/keywords.
What I have thought of for now is to create a CHM, as it will need less effort than creating new custom-built software.
For that I will programmatically create a separate HTML page for each image, with the image embedded in it along with the keywords (stored in an Excel sheet along with the path of the image) for which that image should be included in results, as sketched below.
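For illustration, a minimal sketch (Node.js/TypeScript; file names and keyword data are made up) of the kind of page generation I have in mind:

import { writeFileSync } from 'fs';

// One entry per scanned page, as it would come out of the Excel sheet.
const pages = [
  { image: 'scans/1998-01-p01.png', keywords: ['editorial', 'january 1998'] },
];

pages.forEach((page, i) => {
  // The CHM compiler should pick the keywords up from the meta tag.
  const html = `<html>
<head>
  <title>Page ${i + 1}</title>
  <meta name="keywords" content="${page.keywords.join(',')}">
</head>
<body><img src="${page.image}"></body>
</html>`;
  writeFileSync(`page${i + 1}.html`, html);
});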
So I want a CHM creator that can parse HTML meta tags and add the keywords to the CHM keyword list.
One such software I have found is Abee CHM Maker, but I need a free alternative.
If you have any other ideas for organizing it with minimal effort, those are also welcome.
The standard (free) way to create CHM files is using Microsoft's HTML Help Workshop:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms670169(v=vs.85).aspx
Free Pascal has a CHM creator package, an HTML DOM implementation and a basic command-line compiler for CHM projects (.hhp). The creator package is independent of the MS tools and any other binary blobs, and is available in source. It is portable as far as FPC is portable (not as portable as gcc on paper, but enough in practice, with all major architectures and OSes supported).
One could make something like that. I made something similar, but instead of meta tags I folded titles back into the TOC and index, cleaned up the HTML (TeX4ht output) and fixed links before turning it into a CHM.
But it will require some work, and if you are not familiar with Object Pascal/Delphi (the language), it might be a bridge too far (the hours required would not compare favorably with the cost of the Abee tool, if that would suit your goals).
On the other hand, in a freely programmable system you can decide for yourself how far you automate things. I put in a lot of work once, and now all new output of TeX4ht (with a certain fixed set of settings) formats nicely into CHMs.
See if this helps you (it certainly does what you need):
KEL CHM Creator: http://dumah7.wordpress.com/2009/02/17/kel-chm-creator-v-1-4-0-0/
Alternatively, I think you could add tags to each picture (right-click on it -> Properties -> Details -> Tags) and use Windows Explorer to search them. I have never done this, but it is supposed to work (I guess).
Since MathWorks releases a new version of MATLAB every six months, it's a bit of a hassle having to set up the latest version each time. What I'd like is an automatic way of configuring MATLAB, to save wasting time on administrative hassle. The sorts of things I usually do when I get a new version are:
Add commonly used directories to the path.
Create some toolbar shortcuts.
Change some GUI preferences.
The first task is easy to accomplish programmatically with addpath and savepath. The next two are not so simple.
Details of shortcuts are stored in the file 'shortcuts.xml' in the folder given by prefdir. My best idea so far is to use one of the XML toolboxes in the MATLAB Central File Exchange to read in this file, add some shortcut details and write them back to file. This seems like quite a lot of effort, and that usually means I've missed an existing utility function. Is there an easier way of (programmatically) adding shortcuts?
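The read-modify-write step I have in mind would look something like this (sketched in TypeScript with @xmldom/xmldom just to show the shape of the edit; the ShortcutEntry element name and its attributes are hypothetical, since I haven't pinned down the real shortcuts.xml schema):

import { readFileSync, writeFileSync } from 'fs';
import { DOMParser, XMLSerializer } from '@xmldom/xmldom';

const path = '/path/to/prefdir/shortcuts.xml'; // as reported by prefdir
const doc = new DOMParser().parseFromString(readFileSync(path, 'utf8'), 'text/xml');

// 'ShortcutEntry' and its attributes are hypothetical; inspect the real file first.
const entry = doc.createElement('ShortcutEntry');
entry.setAttribute('label', 'Clean workspace');
entry.setAttribute('callback', 'clear; clc;');
doc.documentElement.appendChild(entry);

writeFileSync(path, new XMLSerializer().serializeToString(doc));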
Changing the GUI preferences seems even trickier. preferences just opens the GUI preference editor (equivalent to File -> Preferences); setpref doesn't seem to cover GUI options.
The GUI preferences are stored in matlab.prf (again in prefdir), this time in traditional name=value config style. I could try overwriting values in it directly, but it isn't always clear what each line does, how much the names differ between releases, or how broken MATLAB will be if this file contains dodgy values. I realise that this is a long shot, but are the contents of matlab.prf documented anywhere? Or is there a better way of configuring the GUI?
For extra credit, how do you set up your copy of MATLAB? Are there any other tweaks I've missed, that it is possible to alter via a script?
shortcuts - read here and here
preferences - read http://undocumentedmatlab.com/blog/changing-system-preferences-programmatically/
At the moment, I'm not using scripts, though this sounds like a very interesting idea.
Unless there are new features that you also want to configure, you can simply copy-paste the old preferences into the new prefdir. I guess this should be doable programmatically, though you might have to select the old prefdir via uigetdir. So far, this has not created major problems for me. Note also that in case of a major change in the structure of preferences, any programmatic version would have to be rewritten as well.
I'm adding paths at each startup, so that I don't need to remember to manually add new directories every time I change something in my code base (and I don't want to have to update directory structures for each user). Thus, I also need to copy startup.m for each installation.
If I had to do everything manually, I'd also want to change the autosave options to store the files in an autosave directory. If I recall correctly, Matlab reads the colors and fonts from previous installations, so I don't have to do that.
MonoDevelop creates .pidb files for every project. Should I include them in source control?
From a MonoDevelop blog post:
There were several long-pending bug reports, and I also wanted to improve the performance and memory use a bit. MonoDevelop creates a Parser Information Database (pidb) file for each assembly or project. This file contains all the information about classes implemented in an assembly, together with documentation pulled from Monodoc. A pidb file has three sections: the first one is a header which contains, among other things, the version of the file format (that version is checked when loading the pidb, and the file will be regenerated if it doesn't match the current implementation version). The second section is the index of the pidb file. It contains an index of all classes in the database. The index is always fully loaded in memory to be able to quickly locate classes. The third section of the file contains all the class information: list of methods, fields, properties, documentation for each of those, and so on. Each entry in the index has a file offset field, which can be used to completely load all the information of a class (the index only contains the name).
So it sounds like it's really just an optimization. I would personally not include it in source control unless you find it makes a big difference to performance: my guess is it will only really stay valid if only one person is working on the project at a time. (If it's big and changes regularly, you could find it adds significant overhead to the repository too. I haven't checked to see what the size is actually like, but it's worth checking.)
They're just cached code completion data. As the post Jon linked explains, the main reason is to save memory, though they do also save you from waiting for MD to parse all the source files and referenced assemblies when you open a project.
The pidb files can be regenerated pretty quickly, so there's no advantage to keeping them in the VCS. Indeed, as well as the VCS repository overhead, it could also cause problems if people are using different versions of MD with different pidb formats, so I'd strongly recommend against keeping them in source control.