Putting aside the security implications of running a script someone gives me, how can I tell, in advance, that the script requires a certain number of arguments? Without reading the code.
If someone just gives me a script, is there a way to know that it takes 4 arguments or whatever the case may be?
I guess I am looking for a best practices answer. I am obviously not a developer and just curious as to how some things are done.
What kind of script do you want to know about? Shell, Windows Batch, Ruby, or Python?
For scripts in Python, it's impossible to know the number of arguments without reading the code. In Python, we can pass any arguments to a script; the script itself determines whether to use them.
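For example, this minimal Python script (a sketch, not from the question) accepts however many arguments you give it; nothing in the language declares a required count:
import sys

# the script receives whatever was passed; it alone decides what it requires
print("received", len(sys.argv) - 1, "argument(s):", sys.argv[1:])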
It's expected etiquette that the script's author(s) provide documentation describing some or all of: the script's purpose, expected arguments and operational modes.
Some scripts generate an abbreviated usage message (listing accepted arguments) when run with an appropriate help switch, eg theScript -h, theScript --help or theScript /?.
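In Python, for instance, the standard argparse module produces such a usage message automatically when the script is run with -h (a minimal sketch; the argument names here are invented):
import argparse

# argparse prints a usage message listing the arguments and exits
# if required arguments are missing or -h/--help is given
parser = argparse.ArgumentParser(description="example script")
parser.add_argument("input_file", help="path to the input file")
parser.add_argument("count", type=int, help="number of items to process")
args = parser.parse_args()
print(args.input_file, args.count)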
Scripts that form part of an installable tool, package or application may have an associated "manpage" (man theScript) or published documentation, eg hypertext pages, text files, printed manuals or pages on the Internet. Such documentation might be found by browsing the filesystem / Start menu (Windows) / provided materials and original installation media or by searching the Web.
Of course, this applies only by convention; generally there is no contract that is enforced on the script by a computer system. If someone is "giving you a script" (of questionable origin) then none of the above is guaranteed.
If you expressly receive a script (containing text readable in an editor and not binary gibberish) then the contents might include a section of prose containing useful information without your resorting to reading and understanding the "code".
I would like to identify all functions needed to run a specific function in octave. I need this to deploy an application written in Octave.
While Matlab offers some tools to analyse a function's dependencies, I could not find anything similar for Octave.
Trying inmem as recommended in matlab does not produce the expected result:
> inmem
warning: the 'inmem' function is not yet implemented in Octave
Is there any other solution to this problem available?
First, let me point out that from your description, the matlab tool you're after is not inmem, but deprpt.
Secondly, while octave does not have a built-in tool for this, there are a number of ways to do it yourself. I have not tried these personally, so, ymmv.
1) Run your function while using the profiler, then inspect the functions used during the running process. As suggested in the octave archives: https://lists.gnu.org/archive/html/help-octave/2015-10/msg00135.html
2) There are some external tools on github that attempt just this, e.g. :
https://git.osuv.de/m/about
https://github.com/KaeroDot/mDepGen
3) If I had to attack this myself, I would approach the problem as follows (a rough sketch appears after these steps):
Parse and tokenise the m-file in question. (Possibly also use boolean checks like isvarname to filter useless tokens before moving to the next step.)
For each token x, wrap a "help(x)" call in a try / catch block
Inspect the error, this will be one of:
"Invalid input" (i.e. token was not a function)
"Not found" (i.e. not a valid identifier etc)
"Not documented" (function exists but has no help string)
No error, in which case you stumbled upon a valid function call within the file
To check whether these are builtin functions or part of a loaded package, you could further parse the first line of the "help" output, which typically tells you where the function came from.
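Here is a rough, untested sketch of that idea in Octave (the function name and the regex are mine, not a finished tool):
function deps = find_candidate_functions (mfile)
  txt    = fileread (mfile);
  tokens = unique (regexp (txt, '[A-Za-z]\w*', 'match'));
  deps   = {};
  for i = 1:numel (tokens)
    tok = tokens{i};
    if (! isvarname (tok))   # cannot be an identifier, skip it
      continue;
    endif
    try
      help (tok);            # errors if tok is not a known function
                             # (this prints the help text; a real tool
                             # would capture it instead)
      deps{end+1} = tok;     # no error: tok resolved to a documented function
    catch err
      # inspect err.message here to separate "not found" from
      # "function exists but is not documented", as described above
    end_try_catch
  endfor
endfunction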
If the context for this is that you're trying to check whether a matlab script will work on octave, one complication is that packages required on octave are typically not loaded anywhere in the matlab code. Then again, if this is your goal, you should probably be using deprpt from matlab directly instead.
Good luck.
PS. I might add that the above is for creating a general tool. In terms of identifying dependencies in your own code, good software engineering practices go a long way towards providing maintainable code and easily resolving dependency problems for your users. E.g.:
-- clearly identify required packages (which, unlike matlab, octave encourages anyway by requiring such packages to be visibly loaded in code)
-- similarly, for custom dependencies, consider wrapping and providing these as packages / namespaces, rather than scattered files
-- if packaging dependencies isn't possible, you can create tests / checks in your file that throw errors if necessary files are missing, or at least mention such dependencies in comments in the file itself.
According to Octave Compatibility FAQ here,
Q. inmem
A. who -functions
You can use who -functions. (Note: I have not tried it yet.)
I think this is more of a Tcl configuration question rather than a Tcl coding question...
I inherited a whole series of Tcl scripts that are used within a simulation tool that my company built in-house. In my scripts, I'm finding numerous instances where there are function calls to functions that don't seem to be declared anywhere. How can I trace the path to these phantom functions?
For example, rather than use source, someone built a custom include function that they named INCLUDE. Tclsh obviously balks when I try to run the scripts there, but within my simulation software, they run fine.
I've tried grep-ing through the entire simulation software for INCLUDE, but I'm not having any luck. Are there any other obvious locations outside the simulation software where a Tcl function might be defined?
The possibilities:
Within your software. (you have checked for this).
Within some other package included by the software.
Check and see if the environment variable TCLLIBPATH is set, and also whether the simulation software sets TCLLIBPATH itself. This will be a list of directories to search for Tcl packages, and you will need to search the packages that are located outside of the main source tree.
Another possibility is that the locations are specified in the pkgIndex.tcl file.
Check any pkgIndex.tcl files and look for locations outside the main source tree.
Within an unknown command handler. This could be in your software or within some other package. You should be able to find some code that processes the INCLUDE statement.
Within a binary package. These are shared libraries that are loaded by Tcl. If this is the case, there should be some C code used to build the shared library that can be searched.
Since you say there are numerous instances of unknown functions, my first guess is that you have not found all the directories where packages are loaded from. But an ''unknown'' command handler is also a possibility.
Edit:
One more possibility I forgot. Check and see if your software sets the auto_path variable, and check any directories added to the auto_path for other packages.
This isn't a great answer for you, but I suspect it is the best you're going to get...
The procedure could be defined in a great many places. Your best bet for finding it is to use a tool like findstr (on Windows) or grep -R (on POSIX platforms) to search across all the relevant source files. But that still might not help! It might not be a procedure but instead a general command, which could be implemented in C and not as a procedure, or it could be defined in a packaged application archive (which are usually awkward to look inside). There are also other types of script-implemented command too, which could make things awkward. Generally searching and investigating is your best bet, but it might not work.
Tcl doesn't really differentiate strongly between different types of command except in some introspection operations. If you're lucky, you could find that info body tells you the definition of the procedure (and info args and info default tell you about the arguments) but that won't help with other command types at all. Tcl 8.7 will include a command (info cmdtype) that would help a lot with narrowing down what to do next, but that's no use to you now and it definitely doesn't exist in older versions.
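If INCLUDE does turn out to be a procedure, a quick check might look like this (a sketch; run it inside the simulation tool, where INCLUDE works):
if {[llength [info procs INCLUDE]]} {
    puts "args: [info args INCLUDE]"     ;# the procedure's argument list
    puts "body: [info body INCLUDE]"     ;# the procedure's script body
} elseif {[llength [info commands INCLUDE]]} {
    puts "INCLUDE is a command but not a proc (likely C-implemented)"
} else {
    puts "INCLUDE is not visible in this interpreter"
}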
I'm creating some default "drag and drop" templates for our developers, and one section is the required tags. Most of the tags reference a variable: nice and easy. But one needs to reference the resource itself, and I cannot figure out a way to do it. Does anyone have any suggestions?
The tag itself is called "Context" and its value should be the "type" of the resource it is in, e.g. "Microsoft.Web/serverfarms". This is desired to aid with billing. Obviously I could either create a different template per resource type (not ideal, considering the number of different resources) or rely on the devs to update the field manually (not ideal either, as relying on them to add the tags manually hasn't worked so far in a lot of cases), but I am trying to automate it.
Extrapolating from the [variables('< variablename >')] function I did try [resources('type')] but Azure complained that "resources is not a valid selection". I thought it might have complained that it couldn't tell which resource to look at, but it didn't get that far. Internet searches have not turned up anything useful so far.
I can't find a way to do this cleanly either (I hope someone corrects me, though! This is a topic for us too). The reference and resourceId functions look promising, but both are unavailable inside the resources block, would require some parsing, and also require the api version, which you would probably also need to vary by resource, so you're back where you started. ARM won't even let you use a variable for the resource type property (probably a good thing), so that option is out too.
As such, you'll either have to live with your team having to replace that chunk of text manually or pursue some alternative.
The simplest thing that comes to mind would be to write a script in a language that understands JSON. That script reads the template, adds the tag to the resource, then saves the template again.
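A minimal sketch of that idea in Python (the file name azuredeploy.json is an assumption; the "Context" tag key is from your question):
import json

# load the ARM template, tag each resource with its own type, save it back
with open("azuredeploy.json") as f:
    template = json.load(f)

for resource in template.get("resources", []):
    resource.setdefault("tags", {})["Context"] = resource["type"]

with open("azuredeploy.json", "w") as f:
    json.dump(template, f, indent=2)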
A similar approach would be to do it after the resources are deployed, by writing a script that loops through all resources and makes sure they have the tag. You can use automation to schedule this on a regular basis if you're concerned about it being missed. If you're deploying the templates with a script, you could add it there too.
There are some things you could probably do with nested templates, but you probably wouldn't be making anyone's life easier or the process more reliable.
This could potentially be achievable through some PowerShell, specifically around Resource and Resource Group. You would need to run Get-AzResource either at the subscription level or potentially just the resource group level, pull the ResourceType field from the returned objects, and then use a Set-AzResource command, passing in the ResourceId from above and the new tag mapped to the returned ResourceType field.
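Roughly like this (a sketch, assuming the Az module is installed; the resource group name is made up):
# tag every resource in a resource group with its own resource type
$resources = Get-AzResource -ResourceGroupName "my-resource-group"
foreach ($r in $resources) {
    $tags = if ($r.Tags) { $r.Tags } else { @{} }
    $tags["Context"] = $r.ResourceType
    Set-AzResource -ResourceId $r.ResourceId -Tag $tags -Force
}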
Can someone explain how being a "googler" or not affects how an open source package builds or not?
When attempting to build v8 the build docs state
"If you are a non-googler you need to set DEPOT_TOOLS_WIN_TOOLCHAIN=0"
When I set DEPOT_TOOLS_WIN_TOOLCHAIN to 0 as a "non googler" the build cuts short.
When I set DEPOT_TOOLS_WIN_TOOLCHAIN to 1 as a "googler" the build doesn't cut short but errors out later on in a way that points to requiring a specific hash value on the build system.
When inquiring about the error on the v8-users Google group, an employee of Google stated:
"It wouldn't enter this code if the environment variable I mentioned
was set correctly. If you do enter this code it's not set. And it is
expected to fail"
Which means the build is expected to fail for "non googlers".
He goes on to say that the build platform I'm on is not supported (non googler, no hash value...) yet that "it should compile at least".
Can someone explain how "it should compile at least"?
If you are a "non googler" do you use another build script and build tools ? Possibly get the source otherwise and use different parameters ? Do you even attempt to build the package at all (in the sense that "non googlers" are not meant to build the package)?
If anyone has some experience here it would be helpful as it would save a lot of time and trouble for people trying to build packages with
set DEPOT_TOOLS_WIN_TOOLCHAIN=0 if you are not a googler
Thank you.
You should certainly be able to build V8. You do not need access to any special infrastructure or tooling. There are many V8 committers that are not Google employees.
That particular environment variable DEPOT_TOOLS_WIN_TOOLCHAIN is different for Google employees for licensing reasons (it controls distribution of the Microsoft toolchain via depot_tools), but you can build V8 both with and without that variable.
I am new to Common Lisp (using Clozure Common Lisp under Microsoft Windows), coming from C and Python. So maybe the questions are stupid here, but please be patient and give me some help.
1) What's is the usual way to run a common lisp script?
Right now, I wrote a bat file under Windows to call the ccl executable (wx86cl.exe) and evaluate (progn (load "my_script_full_path") (ccl:quit)) every time I want to "run" my script. Is this the standard way to "run" a script in Common Lisp?
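For reference, the launcher looks roughly like this (the paths are illustrative):
rem run_script.bat: load the script in CCL, then quit
"C:\ccl\wx86cl.exe" --load "C:/path/to/my_script.lisp" --eval "(ccl:quit)"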
Any other suggestions about this?
2) What's the difference between (require 'cxml) and (asdf:operate 'asdf:load-op :cxml)?
They seem to be the same for my script; which one should I use?
3) ignore it, not a clear question
4) When I want to load some library (such as (require 'cxml)), it always takes time (3s or even 5s) to load cxml every time I "run" my script, and there is also a lot of log output to standard output, shown below; it seems to be checking something internal. Does this mean I have to spend 3-5s loading cxml every time I want to run a simple test? That seems a little inefficient, and the output is noisy. Any suggestions?
My Script
(require 'cxml) (some-code-using-cxml)
And the output
; Loading system definition from D:/_play_/lispbox-0.7/quicklisp/dists/quicklisp/software/cxml-20101107-git/cxml.asd into #<Package "ASDF0">
;;; Checking for wide character support... yes, using code points.
; Registering #<SYSTEM "cxml-xml">
......
some my script output
---EDIT TO ADD MORE----
5) I must say that I almost forgot the way of dumping an image to accelerate the loading speed of lisp libraries. So, what is the normal process for developing a (maybe very simple) lisp script?
Based on the answers I have got so far, I guess maybe
a) edit your script
b) test it via a REPL environment, SLIME is a really good choice, and there will be many loops between a <==> b
c) dump the image to distribute it? (I am not sure about this)
6) Furthermore, what is the common way/form for us to release/distribute the final program?
For a lisp library, we just release our source code, and let others "load/require" it.
For a lisp program, we dump an image to distribute once we have confirmed that all functions work well.
Am I right?
What form do we use in a real product? Do we always dump everything into an image at the end to speed up loading?
1) Yes, the normal way to run a whole programme is to use a launcher script. However, windows has much, much better scripting support these days than just the bat interpreter. Windows Scripting Host and PowerShell ship as standard.
1a) During development, it is usual to simply type things in at the REPL (Read-Eval-Print-Loop, i.e. the lisp command line), or to use something like SLIME (for emacs or xemacs) as a development environment. If you don't know what they are, look them up. You may wish to use Cygwin to install xemacs, which will give you access to a range of linux-ish tools.
2) Require is, IIRC, a part of the standard. ASDF is technically not; it is a library that makes other libraries work more conveniently. ASDF has a bunch of features that you will eventually want if you really get into writing large Lisp programmes.
3) Question unclear, pass.
4) See 1a) - do your tests and modifications in a running instance, thus avoiding the need to load the library more than once (just as you would in Python - you found the python repl, right?). In addition, when your programme is complete, you can probably dump an image which has all of your libraries pre-loaded.
Edit: additional answers:
5) Yes
6) Once you have dumped the image, you will still need to distribute the lisp binary to load the memory image. To make this transparent to the user, you will also have to have a loader script (or binary) to run the lisp binary with the image.
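With CCL, dumping such an image looks roughly like this (a sketch; the entry function main and the file names are invented):
;; load everything the program needs, then write out a memory image
(require 'cxml)
(load "my_script.lisp")
(ccl:save-application "my-program.image" :toplevel-function #'main)
You would then run it with something like wx86cl.exe -I my-program.image; passing :prepend-kernel t to save-application instead produces a single standalone executable.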
You don't have to start the lisp from scratch and load everything over again each time you want to run a simple test. For more efficient development, interactively evaluate code in the listener (REPL) of a running lisp environment.
For distribution, I use Zachary Beane's Buildapp tool. Very easy to install and use.
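Invocation is roughly like this (a sketch; the entry point my-package:main and the file names are invented):
buildapp --load my_script.lisp --entry my-package:main --output my-program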
Regarding distribution -
I wrote a routine (it's at home and unavailable at the moment) that will write out the current image as a standard executable and quit. It works for both CLISP and SBCL.
I can rummage it up if you like.