Tcl info frame not giving correct file name and line number?

We have a problem with info frame not giving the correct file-name and line-number.
We are using Tcl 8.6 with forked implementations of proc and source: we override the source command with custom code so that some lines can be skipped. After this change, info frame no longer works in the forked version.
Is there a solution for this?

Alas, if you override bits and pieces of code like that then you lose the file tracking. It's done by tracing identities of literals and it really doesn't like what you're up to there.
Formally, the file tracking is known to be theoretically flaky, but it doesn't seem to be much of a problem in practice unless you get into the level of processing that you're up to. One possible workaround is to do your preprocessing by changing files into other files (e.g., in a “deploy” directory) so that source and proc can stay conventional. (Doing that sort of copy is also pretty much what you do when you build an application; you've just got a filtering copy instead of a simple one.)
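For instance, if the lines to skip can be recognised one at a time, the filtering copy can be a small script run as a build step. The sketch below is Python and purely illustrative: skip_line() and the .tcl filter stand in for whatever rule your custom source command currently applies. Blanking skipped lines instead of deleting them keeps the line numbers reported by info frame in step with the original sources.

import os
import shutil

def skip_line(line):
    # Hypothetical example rule: drop lines marked with a special comment.
    return line.lstrip().startswith("#@skip")

def deploy_tree(src_dir, deploy_dir):
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        out_root = os.path.join(deploy_dir, rel)
        os.makedirs(out_root, exist_ok=True)
        for name in files:
            src_path = os.path.join(root, name)
            dst_path = os.path.join(out_root, name)
            if not name.endswith(".tcl"):
                shutil.copy2(src_path, dst_path)
                continue
            with open(src_path) as fin, open(dst_path, "w") as fout:
                for line in fin:
                    # Keep the newline even for skipped lines so the line
                    # numbers reported by [info frame] still match the
                    # original sources.
                    fout.write("\n" if skip_line(line) else line)

# deploy_tree("src", "deploy")   # then run: tclsh deploy/main.tcl

With this in place, source and proc stay completely conventional, so the frame data is built exactly as usual.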
Details
The following two locations in Tcl's source contain the heart of the problem for you:
generic/tclProc.c lines 199–280, Tcl_ProcObjCmd()
generic/tclIOUtil.c, lines 1955–1959, TclNREvalFile() (and probably lines 1819–1823, Tcl_FSEvalFileEx() too, if you want to do file evaluation that's not source)
You want that code in tclProc.c to trigger so that the frame data is built for the procedure, but for that you need the trigger in tclIOUtil.c to set the triggering action. Your changes to proc and source block both of these. The info frame command reads the data that that block in tclProc.c generates.
Perhaps the easiest way — if you're building custom C in the first place — is to insert your processing in those two functions in tclIOUtil.c; I'd do it by calling a shared function that modifies the contents of the buffer in the Tcl_Obj passed in (which will be single-referenced at that point, and hence writable). Just… don't alter the number of newlines if you want the data out of info frame to be sensible.

Related

How to make a function file in Octave with multiple functions

I know that you can make a function m-file in Octave in which the file name is the same as the function it defines, but I would like to define multiple functions in one file. Is there any way to do this, or do I need a separate file for each function?
In this answer I will assume that your main objective is a tidy workspace rather than explicitly a one-file requirement.
Let's get the one-file approach out of the way. You can create a script m-file (not a function m-file), and define a number of command-line functions there. The octave manual has a section on this. Here's an example:
% in file loadfunctionDefinitions.m
1; % statement with side-effect, to mark this file as a script. See docs.
function Out = Return1(); Out = 1; end
function Out = Return2(); Out = 2; end
% ... etc
% in your main octave session / main script:
X = Return1() + Return2();
However, this is generally not recommended, especially if you require matlab-compatible code: matlab introduced 'script-local functions' much later than octave, and decided to do it in a manner incompatible with the existing octave implementation. Matlab expects script-local functions to be defined at the end of the script, whereas octave expects them to be defined before first use. However, if you use normal function files, everything is fine.
While I appreciate the "I don't like a folder full of functions" sentiment, the one-function-per-file approach actually has a lot of benefits (especially if you program from the terminal, which makes a wealth of tools twice as useful). E.g. you can easily use grep to find which functions make use of a particular variable. Or compare changes in individual functions from different commits, etc.
Typically the problem is more one of having such function files littering the directory when other important files (e.g. data) are present, which makes what you want hard to spot and feels untidy. But rather than have a single file with command-line definitions, there are a number of other approaches you can take, which are probably also better from a programmatic point of view, e.g.:
Simply create a 'helper functions' folder, and add it to your path.
Use subfunctions in your main functions whenever this is appropriate, to minimize the number of unnecessary files
Use a private functions folder
Use a 'package directory', i.e. a folder starting with the character '+', which creates a namespace for the functions contained inside. E.g. ~/+MyFunctions/myfun.m would be accessed from ~/ via MyFunctions.myfun(), without having to add +MyFunctions to the path (in fact you're not supposed to).
Create a proper class directory, and make your functions methods of that class
The last option may also achieve a one-file solution, if you use a newer-style classdef based class, which allows you to define methods in the same file as the class definition. Note however that octave-support for classdef-defined classes is still somewhat limited.

PhpStorm: how to apply external tool (jpegoptim) on many files?

I am using jpegoptim in PhpStorm as an external tool.
It works fine when I select one file.
How can I apply it to many JPEG files?
That's not possible at the moment (not supported).
https://youtrack.jetbrains.com/issue/IDEA-90239
https://youtrack.jetbrains.com/issue/IDEA-97869
Watch these tickets (star/vote/comment) to get notified on any progress.
If you definitely need it in one go (and not calling that External Tools entry once for each file)... then what you may try is:
Select desired files
Use Copy Paths from context menu
Call another External Tools entry that:
uses the $ClipboardContent$ macro as its parameter, and
runs some shell/batch script that parses that parameter (splits it into separate lines to get the individual paths) and then calls the actual program in a loop -- once for each file from the parsed parameter (a rough sketch of such a script follows below).
A bit too complicated for my liking... Plus, I've not tried it myself, so I'm not sure how the line-ending symbols will be passed here (so that the parameter can be parsed in the script).
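For what it's worth, the parsing script itself can be quite small. Here is a rough sketch in Python (the script name and the assumption that the External Tools entry passes $ClipboardContent$ as a single argument are mine, not part of the original suggestion):

import subprocess
import sys

def main():
    if len(sys.argv) < 2:
        sys.exit("usage: optimize_paths.py <newline-separated file paths>")
    # splitlines() copes with \n, \r\n or \r, whichever way the clipboard
    # content happens to arrive.
    paths = [p.strip() for p in sys.argv[1].splitlines() if p.strip()]
    for path in paths:
        # Run jpegoptim once per file; check=False so one failure does not
        # abort the rest of the batch.
        subprocess.run(["jpegoptim", path], check=False)

if __name__ == "__main__":
    main()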
BTW -- you can assign a custom shortcut to a particular External Tools entry, so you may call it for each file individually -- it will be faster with a shortcut than doing the same with the mouse.

Perform action before save (`on_pre_save`)

import sublime_plugin

class Test(sublime_plugin.EventListener):
    def on_pre_save(self, view):
        view.set_syntax_file("Packages/Python/Python.tmLanguage")
Here is a simple example. Logically (from my point of view), it should change the syntax before saving, and so the file should be saved as <filename>.py.
But actually, the syntax is changed after the save operation. So, if I originally worked with a js file, it will be saved as js, not py.
I'm wondering why on_pre_save works so strangely or, in other words, whether there is any difference between on_pre_save and on_post_save. Also, and this is my practical interest, how can I perform some arbitrary(1) action right before saving?
(1) I specifically use the word "arbitrary" because I don't mean only syntax changes. It may be something different, for example changing the font from Consolas to Times New Roman.
The on_pre_save event happens just before a file buffer is written to disk, and allows you to take any action that you might want to take before the file on disk changes, for example making some change to the content of the buffer (e.g. "reformat on save").
The on_post_save event happens just after the file buffer has been written to disk, allowing you to take any action you might want to take after a save operation, for example examining the contents of the buffer once it's "final" (e.g. "lint on save", which if done through an external tool requires the changes to be on disk and not just in memory).
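For illustration, here is a minimal on_post_save sketch along those lines (flake8 and the class name are just examples I've picked; any external tool that reads the saved file would do):

import subprocess
import sublime_plugin

class LintOnSaveExample(sublime_plugin.EventListener):
    def on_post_save(self, view):
        name = view.file_name()
        if not name or not name.endswith(".py"):
            return
        # The buffer has already been written out, so the external tool
        # sees the freshly saved contents on disk.
        proc = subprocess.Popen(["flake8", name],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        out, _ = proc.communicate()
        # A real plugin would show this in an output panel rather than the
        # console, and would not block the UI thread like this does.
        print(out.decode("utf-8", "replace"))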
In either case the file name of the file has already been selected by the user at the time the event happens. For a new file, that means that on_pre_save doesn't happen until after they've selected the name and location of the file. For an existing file, save just resaves with the same filename.
To answer your question, you can do most any "arbitrary" thing you want in the on_pre_save to have it happen prior to when a save happens. It's also possible to change the filename in that situation if you really want to.
Note however that changing the filename out from under the user without asking them first is decidedly bad UX. Additionally, if you change the filename to a file that already exists from within on_pre_save sublime will blindly overwrite the file with no warnings, which is also Bad Mojo.
For something that's going to alter the name and location of the file on disk, the more appropriate way to go is have a command the user has to explicitly invoke to make that happen so that they're fully aware of what's going on.
As requested in a comment and for completeness, here's an example that does what you wanted your example code above to do.
The important thing to note here is that you have to be extremely careful about the situation that you trigger this event in. As written above, your plugin would make it impossible to ever save any kind of file ever due to it swapping over to a python file instead.
In this example it's constrained to only take effect on a text file, turning it into a python file. Note however that if there was a python file with that name already in that location, it would overwrite it without warning you that it's about to happen.
Be extremely wary with this code; it's quite easy to accidentally stop yourself from being able to save files with the correct name, which could for example stop you from being able to use Sublime to fix the code, amongst other nasty issues.
import sublime_plugin
import os

class TestListener(sublime_plugin.EventListener):
    def on_pre_save(self, view):
        # This part is extremely important because as mentioned above it's
        # entirely disconcerting for your save operation to gank your
        # filename and make it suddenly be something else without any
        # warning. If you're not careful you might destroy your ability to
        # use sublime to fix your plugin, for example.
        if not view.file_name().endswith(".txt"):
            print("Doing nothing for: ", view.file_name())
            return

        # HUGE WARNING: This CAN and WILL willfully clobber over any file
        # that already happens to exist without any warning to you
        # whatsoever, and is most decidedly a Bad Idea(tm)
        python_name = os.path.splitext(view.file_name())[0] + ".py"
        view.retarget(python_name)

Verify a Tif with ApprovalTests

I have been asked to update a system where header information gets injected into a tif via a 3rd party console application. I don't need to worry about that bit.
The part I have been asked to look at is the merge process that generates the header information.
The current file generated by the process is assumed to be correct before I make any changes, so I want to add this as an approved result; from that I can then check that the changes I make alter the file as expected.
I thought this would be a good opportunity to look at using ApprovalTests.
The problem I have is that, for whatever reason, the links to the videos are considered corruptible (possibly showing me kittens jumping into boxes or something, which will stop me working; ironically this means my work slows down because I cannot see any of the help videos).
What I have been looking at is the Approvals.Verify and Approvals.VerifyFile extensions.
But what appears to be happening is confusing me.
Using VerifyFile creates a received file, but the contents of that file are just a single line containing the name of the file I asked it to verify.
Using Verify(new FileInfo("FileNameHere")) does not appear to generate the received file that I need to flag as approved, but the test does fail saying that it cannot find the approved tif file.
I am probably using VerifyFile completely wrong and might be looking at using Verify wrong as well.
Useful info?
It might be useful to know that, as this is a legacy application running as a windows service, I have wrapped the service in a harness that allows me to call the routines. The files are physically being written elsewhere on the machine, outside of my control (well, there is a config, but the service I call generates a file in a fixed location if it is successful). I have tried copying that into the unit test project, but that doesn't appear to help.
Verify(File) and VerifyFile(string) are both meant to verify an existing file. As such, they merely set the received file to the file you pass in. You will still need to move/approve/create the approved file.
Here is the pseudo code and process.
[UseReporter(typeof(DiffReporter), typeof(ClipboardReporter))]
public void TestTiff()
{
    string tif = YourProcessToCreateTifFile();
    Approvals.VerifyFile(tif);
}
[Note: if you don't have an image diff installed, like TortoiseDiff, you might want to use the FileLauncherReporter]
Run this, once you get the result, move the file over by pasting your clipboard into a cmd window.
It will move the temporary tif to your test directory with the name ClassName.TestTiff.approved.tif
After that the test should pass until something changes.
Happy Testing!

Design Pattern to require multiple events before executing method?

There are many times that I've needed to execute some code after a number of events have fired, and I've come up with counters and such but I feel there must be a better way.
For example, say five files need to be loaded, after which a UI component will become active.
If I set up a counter that increments each time a file is requested, then decrements each time one has loaded, I run the risk that the first two or three files may somehow get completely loaded before my code gets around to requesting the fourth and fifth, which would mean that my counter would be at zero when I still have two files to load, thus allowing the UI component to be prematurely activated.
There are some cases where you could know the number of files that need to be loaded before the requests go out, but it's possible that the first file contains the paths (and therefore the number) of the remaining files. (And this file-loading scenario is only an example of the pattern I'm trying to explain.)
Does anyone have an elegant solution for this? (Does my description make sense?) Thanks!
You could do something with a task framework like spicelib
Using that as an example
Create a FileRecursionLoadTask which grabs a file and completes when that file and any references it makes are loaded.
Add each FileRecursionLoadTask to a SequentialTaskGroup.
When the TaskGroup is completed, then you know all of the file loads have completed.
There are also plenty of other task frameworks which you might like better. For example, Spring ActionScript also has one.
Before executing a request, store a reference (a unique request uri, the loader object or a special command object) in a list. When a loader has finished, remove that object and call a function that checks if there are remaining active tasks in the list.
This isn't specific to file requests, nor to requests in general; it can be used for anything that needs to wait for multiple actions to finish. Multiple lists can be used to process multiple types of action at the same time. The object stored in the list could be implemented as a command object, which could provide more information about the task. This is called the command pattern.
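Here is a minimal Python sketch of that bookkeeping (the names PendingTracker, start and finish are made up for illustration): each request is registered before it is started, so an early completion can never make the collection look empty while requests remain.

class PendingTracker:
    def __init__(self, on_all_done):
        self._pending = set()
        self._on_all_done = on_all_done

    def start(self, request_id):
        # Register the request *before* kicking it off, so an early
        # completion cannot empty the set prematurely.
        self._pending.add(request_id)

    def finish(self, request_id):
        self._pending.discard(request_id)
        if not self._pending:
            self._on_all_done()

# Usage sketch (load and enable_ui are hypothetical):
# tracker = PendingTracker(on_all_done=enable_ui)
# for url in known_urls:
#     tracker.start(url)
#     load(url, on_complete=lambda u=url: tracker.finish(u))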
If you're doing just loading, like Jacob, I would also suggest a library that handles loading
In the case of a more complicated situation, like mixing loaders and other event listeners, I would suggest using an event that fires whenever there is any change to any of the dependencies. In addition, all the objects/classes would have a state.
Then I would create a listener-adding function for the class that needs to perform or initiate the action; it would take 3 parameters (a rough sketch follows after this list):
an object with an event dispatcher (assuming they all use the same update event), e.g. assetLoader
the name of the object state, e.g. headerLoaded
the desired state value, e.g. true
The function would add the listener to a chain of listeners, and any time any of the listeners fires, all objects would check whether the state values match the desired values.
This would allow for regression as well (like when a user presses a button and the content starts loading, but then the user presses cancel: even if all the assets load, the state of one object would be false, thus not allowing the item to complete). If you were using counters, it would be the equivalent of adding instead of subtracting, but much more reliable.
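A rough Python sketch of that dependency-state idea (the names ReadyGate, add_dependency and notify are made up for illustration; in ActionScript the notify call would be your shared update-event handler):

class ReadyGate:
    def __init__(self, on_ready):
        self._deps = []          # (object, attribute name, desired value)
        self._on_ready = on_ready

    def add_dependency(self, obj, attr_name, desired_value):
        self._deps.append((obj, attr_name, desired_value))

    def notify(self):
        # Called whenever any dependency changes state; re-check them all.
        if all(getattr(obj, attr) == want for obj, attr, want in self._deps):
            self._on_ready()

# Usage sketch (asset_loader, user_session, activate_component are hypothetical):
# gate = ReadyGate(on_ready=activate_component)
# gate.add_dependency(asset_loader, "headerLoaded", True)
# gate.add_dependency(user_session, "cancelled", False)
# ...each dependency calls gate.notify() whenever its state changes...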
Looking for a design pattern? Try the command pattern (http://johnlindquist.com/2010/09/09/patterncraft-command-pattern/).
(The video is a great example of what the command pattern is and how it works, using Starcraft as an example.)
The implementation is that you queue your load commands so that they do not execute out of order, and you can add the enable or disable commands to your command queue. So the command pattern will play back your commands something like: load, load, load, enable ui item, load, load, enable another item.
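As a toy illustration of that playback order (a synchronous Python sketch with made-up command names; a real implementation would only advance the queue when each asynchronous load completes):

from collections import deque

class Command:
    def execute(self):
        raise NotImplementedError

class LoadCommand(Command):
    def __init__(self, path):
        self.path = path
    def execute(self):
        print("loading", self.path)       # stand-in for the real load

class EnableCommand(Command):
    def __init__(self, widget_name):
        self.widget_name = widget_name
    def execute(self):
        print("enabling", self.widget_name)

queue = deque([
    LoadCommand("a.dat"), LoadCommand("b.dat"), LoadCommand("c.dat"),
    EnableCommand("ui item"),
    LoadCommand("d.dat"), LoadCommand("e.dat"),
    EnableCommand("another item"),
])
while queue:
    queue.popleft().execute()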
Good luck