Line overflow in Cadence EDI - Tcl

I am working on a script in the Cadence EDI tool (the EDI shell is Tcl based). My code looks something like:
namespace eval clockgatecloning {
....
.....
......
...
}
There are a number of nested statements, procs calling each other.
Now I am working on a big database which provides a set of data to this code each time, and this happens roughly 5000 times. I left my code running overnight and it was running properly (dumping out some data on the shell at each iteration). However, when I checked today, this message was displayed:
<<: Line overflow.
Is this an error? Has my run completed? This has happened once before and I have no clue why. Does it have something to do with memory?
Please help me out.

Tcl itself does not have limits on the length of lines; if you want a line many megabytes long, you can have it. (It's probably not a good idea if you don't strictly need it, but that's your call.) This applies to both lines in scripts and lines in data files. The main limits have to do with how much memory you've got, and exceeding them won't produce that error message. Indeed, that message is not present anywhere in the Tcl source code.
It's entirely possible that Cadence EDI may have its own limits, but these are more likely to be somewhere like logging or in parts that are not Tcl-related (though they are obviously accessed via some interface that ends up exposed to the Tcl level).

The code is 1000+ lines and basically performs an operation on clock-gates in a design in EDI. There are around 5000 clock-gates to clone in the design (the operation is cloning and reassigning the sinks of one clock-gate to its new clones).
EDI runs on a load sharing facility to which I have allotted some memory for this task.
Ideally, when the code runs, a message for each clock-gate is dumped to the EDI shell implying that it is making changes to the design. A snapshot of this is:
Flip instance u_cheetah_core/uvincero_mpupd/uvincero_cpu_l2/uCORTEXA9MP/u_falcon_cpu_power_wrapper0/u_cpu/u_noram/u_core/u_de/u_neon/umcr_mrc_if/RC_CG_HIER_INST1390/RC_CGIC_INST_1 to match row orient.
Flip instance u_cheetah_core/uvincero_mpupd/uvincero_cpu_l2/uCORTEXA9MP/u_falcon_cpu_power_wrapper0/u_cpu/u_noram/u_core/u_de/u_neon/uniq/RC_CG_HIER_INST1401/RC_CGIC_INST_1 to match row orient.
Flip instance u_cheetah_core/uvincero_mpupd/uvincero_cpu_l2/uCORTEXA9MP/u_falcon_cpu_power_wrapper0/u_cpu/u_noram/u_core/u_de/u_neon/uniq/RC_CG_HIER_INST1402/RC_CGIC_INST_2 to match row orient.
Flip instance u_cheetah_core/uvincero_mpupd/uvincero_cpu_l2/uCORTEXA9MP/u_falcon_cpu_power_wrapper0/u_cpu/u_noram/u_core/u_de/u_neon/uniq/RC_CG_HIER_INST1404/RC_CGIC_INST_1 to match row orient.
Flip instance u_cheetah_core/uvincero_mpupd/uvincero_cpu_l2/uCORTEXA9MP/u_falcon_cpu_power_wrapper0/u_cpu/u_noram/u_core/u_de/u_neon/uniq/RC_CG_HIER_INST1405/RC_CGIC_INST_1 to match row orient.
This morning when I checked the shell, I realised that it had exited the code with this message:
:>> Line Overflow.
I checked the EDI log file and even there it showed a similar message.


Is there a way to implement true multi-threading within a puppeteer application?

This may be a really silly question. I admit I'm a bit naive about the Chrome web engine and the V8 JS engine's capabilities.
But say I'm running a puppeteer application which is scraping URLs from tags and pushing them to an array called img2arr.
Then, I have a local file var img1 = './image.jpg'.
Last, I have a function compare(img1, img2arr) which takes in both of these as arguments and, using a library such as blink-diff or Jimp, analyzes and compares img1 against each image in img2arr. This all happens in a .forEach() or .map() loop, which works but can be slow as img2arr grows.
Say it contains 500 image URLs - is there a way to use service workers, a specific Node.js library, or anything else to ensure that my image looping and comparing logic all happens across multiple threads?
For instance, 200 loops comparing two 12KB images take 7 seconds, but with my blazing fast 12-core processor couldn't it take less than 1?
Sorry for my obvious naivety!
There are a few possible ways:
1. Use spawn to run your script separately on every core of your machine. Here is a really nice article.
2. Use Node.js worker threads; here you can find how it works, with examples.
Regardless of which implementation you decide on, the main part would be to collect all the data up to this point:
Last, I have a function compare(img1, img2arr) which takes ....
Then, split your img2arr array into chunks. You can choose any method from this article.
After splitting, pass each chunk to a separate process and wait for the first process to find a similar image.
When a process finds a similar image, you can kill all the other processes from your master process.
So, the full process would be (a rough sketch follows the list):
1. Collect the images to compare.
2. Split the images to compare into chunks.
3. Move the comparison logic into a separate file and run a thread/process on that file.
3a. Send each chunk to an available process.
3b. Wait for the first successful return from a process and kill all the other ones.
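A minimal sketch of steps 2-3b using Node's built-in worker_threads module, written in TypeScript compiled as CommonJS. The same file acts as both the main script and the worker via isMainThread; compareImages is an assumed stand-in for your blink-diff/Jimp logic and is not from the question:
// clone-compare.ts - a rough sketch, not a drop-in implementation
import { Worker, isMainThread, parentPort, workerData } from 'worker_threads';
import * as os from 'os';
import { compareImages } from './compare'; // assumed helper returning Promise<boolean>

// Deal the array out round-robin into one chunk per worker.
function chunkify<T>(arr: T[], parts: number): T[][] {
  const out: T[][] = Array.from({ length: parts }, () => [] as T[]);
  arr.forEach((item, i) => out[i % parts].push(item));
  return out;
}

if (isMainThread) {
  const img1 = './image.jpg';
  const img2arr: string[] = []; // filled by your puppeteer scraping
  const chunks = chunkify(img2arr, os.cpus().length);
  const workers = chunks.map(
    chunk => new Worker(__filename, { workerData: { img1, chunk } })
  );
  let pending = workers.length;
  for (const w of workers) {
    w.on('message', (match: string | null) => {
      if (match !== null) {
        workers.forEach(x => x.terminate()); // step 3b: kill the rest on the first hit
        console.log('match found:', match);
      } else if (--pending === 0) {
        console.log('no match in any chunk');
      }
    });
  }
} else {
  // step 3a: each worker scans only its own chunk and reports back
  const { img1, chunk } = workerData as { img1: string; chunk: string[] };
  (async () => {
    for (const url of chunk) {
      if (await compareImages(img1, url)) {
        parentPort!.postMessage(url);
        return;
      }
    }
    parentPort!.postMessage(null); // nothing matched in this chunk
  })();
}
Whether this actually gets you under a second depends on where the time goes: if most of it is spent downloading the 500 URLs rather than comparing pixels, parallelising the comparison alone won't help much.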

WinDbg single step exception not firing

I am debugging an exe (x86) in WinDbg because it is crashing on my computer; the devs provide no support and it's closed source.
So far I found out that it crashes because a null pointer is passed to ntdll!RtlEnterCriticalSection.
I'm trying to find the source of that null pointer and I've reached a point (my "current point") where I have absolutely no idea where it was called from. I tried searching the area of the last few addresses on the stack, but there were no calls, jumps or returns at all there.
The only thing I have is the last dll loaded before the crash, which is apparently also long (at least a few thousand instructions) before my current point.
I can't just set a few thousand breakpoints, so I thought single-step exceptions could help (I could at least print eip on every instruction; I don't care if that would take days).
But I can't get the CPU to fire the exception! After loading the exe, I enter the following in the debugger:
sxe ld:<dll name>
g
sxe sse
sxe wos
r tf=1
g
The debugger breaks for the loaded dll where I want it to, but after the second g, the program just runs for a few seconds before hitting the crash point, not raising any single step exception at all.
If I do the same without the first two lines (so I'm at the start point of the program), it works. I know that tf is set to zero every time an SSE is fired, but why doesn't it fire at all later in the program?
Am I missing something? Or is there any other way I could find the source of that null pointer?
g is not the command for single stepping; it means "go" and only breaks on breakpoints or exceptions.
To do single stepping, use p. Since you don't have the source code, you cannot do instruction-stepping at source-code level, meaning you have to do it at assembly level. (Assembler instruction stepping should be the default; if not, enable it with l-t.) Depending on how far you need to go, this takes time.
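For instance, keeping your break on the module load and only swapping the final g for stepping (a sketch; p also takes an optional count, and pressing Enter repeats the last command):
sxe ld:<dll name>
g
l-t
p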
The above only answers the question as asked. The open question, as already pointed out in the comments, is what you will do to mitigate that bug. You can't simply create a new critical section, nor do you know which existing critical section should be used in that place.

SSIS 2012 - script task won't run second time (unless debugging)

I'm getting a really tough error in SSIS 2012.
I am just running in SSDT.
I have a script task inside a For...Each block.
It runs fine the first time it is reached.
The second time it is reached, I get a generic "exception thrown by object of invocation" error, attributed to the script, at the script task.
It is a small script, all inside Main(), and with a Try...Catch block.
I am not hitting the Catch, which adds custom text.
It seems like it is behaving as if it never enters the script... except if I actually set a breakpoint in it, in which case it runs fine, whether I step line-by-line or just hit F5.
I know this isn't terribly specific, but I'm hoping someone has seen this.
Has anyone seen anything like this before?
As mentioned, I have tried debugging (obviously), but then I don't get any error.
I have tried changing my variable access from the basic approach to going through VariablesDispenser.LockOneForRead, in case it is something with variables that happens before Main().
I think I got all the places the variables are used in the loop, but that didn't help.
Because this was so killer, I'm going ahead and answering it.
It was actually an un-"declared" variable, but in my Catch block.
Copy-paste error :/
I was using a variable as Dts.Variables["TaskName"] in the Catch block, but I had not selected it in the Script Task window.
I have no idea why it didn't give me the specific "not found in collection" error.
I have certainly run into this before and seen that. :/
Just ran into that and it was a bear to figure out.
What it was, was that I had a static variable (actually a singleton class) defined. Evidently, SSIS does NOT re-initialize a program on the second and subsequent invocations, but holds the image and simply re-launches at its entry point.
My Singleton class (and I've now verified this for several static variables) does NOT get re-initialized. It still exists. The issue was that it was created with the Dts Variable set that existed on the first invocation of the script. Since its "self" value was not null, it never re-instantiated.
When I recognized what was happening, it was of course easy to fix, but one gets used to a stand-alone environment where every program instance has its static values null or set to a static initial value. We automatically presume that a new "run" of the program will likewise have its global spaces "clean"... in point of fact, I'm fairly sure that was part of the C# "contract" I read, that I'd never need to worry about historical cruft in the memory spaces for variables.
Well it turns out that that "contract" is about as good as any Microsoft will make you sign.
It's actually a mixed blessing. Knowing that it happens, I can use it to save a lot of overhead in scripts invoked in loops... but as it's not well documented (or perhaps undocumented), I'll need to be careful to have work-arounds and default loading tests in case it turns out not to be true in some future release or version.
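To make the fix concrete (an illustration of the kind of guard meant above, not the poster's actual code): if the singleton caches the Dts.Variables set it was handed on the first invocation, every later iteration keeps reading that stale set; rebuilding the singleton, or at least refreshing what it holds, at the top of Main() on every invocation is exactly the sort of "default loading test" described above.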
(Be gentle in your criticism... I'm new to SSIS, not so new to programming paradigms. CICS mainframe programs would re-init global spaces unless you did things in the linkage to signal them not to... if you're going to re-invent wheels, at least look at old wheels.)
-- TWZ

What's the debug section in IDA Pro?

I am trying to analyze a dll file with my poor assembly skills, so forgive me if I can't achieve something very trivial. My problem is that, while debugging the application, I find the code I'm looking for only in the debug session; after I stop the debugger, the address is gone. The dll doesn't look to be obfuscated, as much of the code is readable. Take a look at the screenshot. The code I'm looking for is located at address 07D1EBBF in the debug376 section. BTW, where did I get this debug376 section?
So my question is, How can I find this function while not debugging?
Thanks
UPDATE
Ok, as I said, as soon as I stop the debugger, the code vanishes. I can't even find it via a sequence of bytes (but I can in debug mode). When I start the debugger, the code is not disassembled immediately; I have to add a hardware breakpoint at that place, and only when the breakpoint is hit will IDA show the disassembled code. Take a look at this screenshot.
You see the line of code I'm interested in, which is not visible if the program is not running in debug mode. I'm not sure, but I think it's something like unpacking the code at runtime, which is not visible at design time.
Anyway, any help would be appreciated. I want to know why that code is hidden until the breakpoint is hit (it's shown as "db 8Bh" etc.) and how to find that address without debugging, if possible. BTW, could this be code from a different module (dll)?
Thanks
UPDATE 2
I found out that debug376 is a segment created at runtime. So, simple question: how can I find out where this segment came from? :)
So you see the code in the debugger window once your program is running, but you can't find the very same opcodes in the raw hex dump once it's not running any more?
What might help you is taking a memory snapshot. Pause the program's execution near the instructions you're interested in to make sure they are there, then choose "Take memory snapshot" from the "Debugger" menu. IDA will then ask you whether to copy only the data found at the segments that are defined as "loader segments" (those the PE loader creates from the predefined table) or "all segments" that seem to currently belong to the debugged program (including ones that might have been created by an unpacking routine, decryptor, whatever). Go for "All segments" and you should be fine seeing the memory contents, including your debug segments (segments created or recognized while debugging), in IDA when not debugging the application.
You can view the list of segments at any time by pressing Shift+F7 or by clicking "Segments" from View > Open subviews.
Keep in mind that the program you're trying to analyze might choose to create the segment somewhere else the next time it is loaded, to make it harder for you to understand what's going on.
UPDATE to match your second question
When a program is unpacking data from somewhere, it will have to copy that data somewhere. Windows uses virtual memory and nowadays gets really nasty at you when you try to execute or write code at locations you're not allowed to. So any program, as long as we're under Windows, will somehow:
Register a bunch of new memory, or overwrite memory it already owns. This is usually done by calling something like malloc or so [your code looks as if it could have come from a very pointer-intensive language... VB perhaps, or something object oriented]; it mostly boils down to a call to VirtualAlloc or VirtualAllocEx from Windows's kernel32.dll, see http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887(v=vs.85).aspx for more detail on its calling convention.
Perhaps set up Windows exception handling on that, and mark the memory range as executable if it wasn't already when calling VirtualAlloc. This would be done by calling VirtualProtect, again from kernel32.dll. See http://msdn.microsoft.com/en-us/library/windows/desktop/aa366898(v=vs.85).aspx and http://msdn.microsoft.com/en-us/library/windows/desktop/aa366786(v=vs.85).aspx for more info on that.
So now, you should step through the program, starting at its default entry point (OEP), and look for calls to one of those functions, possibly with the memory protection set to PAGE_EXECUTE or a descendant. After that will possibly come some sort of loop decrypting the memory contents and copying them to their new location. You might want to just step over it, depending on what your interest in the program is, by placing the cursor after the loop (usually a thick blue line in IDA) and clicking "Run to cursor" from the menu that appears upon right-clicking the assembler code.
If that fails, just try placing a hardware breakpoint on kernel32.dll's VirtualAlloc and see if you get anything interesting when stepping into the return statement, so you end up wherever the execution chain takes you after the Alloc or Protect call.
You need to find the Relative Virtual Address (RVA) of that code; this will allow you to find it again regardless of the load address (pretty handy with almost all systems using ASLR these days). The RVA is generally calculated as virtual address - base load address = RVA; however, you might also need to account for the section base as well.
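For example (the 07D1EBBF figure is from your screenshot; the base address here is purely hypothetical): if the instruction sits at virtual address 0x07D1EBBF and the module happened to be loaded at 0x07D00000, the RVA would be 0x07D1EBBF - 0x07D00000 = 0x1EBBF; on a later run that loads the module at 0x10000000, the same code would then sit at 0x10000000 + 0x1EBBF = 0x1001EBBF.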
The alternative is to use IDA's rebasing tool to rebase the dll to the same address every time.

What is an efficient way for logging in an existing system

I have the following in my system:
4 File folders
5 Applications that do some processing on files in the folders and then move files to the next folder (processing: read files, update db..)
The process is defined by Stages: 1,2,3,4,5.
As the files are moved along, the Stage field within them is updated to the next Stage.
Sometimes there are exceptions in the system, not necessarily exceptions in the code but exceptions in the process.
For instance, there is an error in transmitting the file to the next folder. In this case the stage is not updated and a record is written in the DB for this file.
Given what I want to do, what is the best approach?
I want to plug in a utility of some sort, or add code to the applications, that will capture any exceptions in the process. For example, if a file was not moved, I want to know at what stage and why. This will help in figuring out the breakdown in the process.
I need something that will provide the overall health of the process.
Not sure how to go about doing this from an architectural point of view.
The scheduler? Well that might knock the idea out anyway.
The exit code is still up and running from the DOS days.
It's a property of the Application class; 0 (the default) means success.
So from your app you'd detect an error and set ApplicationExitCode to some meaningful number like 1703 (boo hoo):
Application.Shutdown(1703); // the .NET 4 way
However, seeing as presumably the scheduler is just running the app, you'd have to script it all up. You might as well just write a common logging dll and add it to each app as mess about with that, especially if you want the same behaviour when it's run from outside the scheduler.
Another option would be delegating, i.e. you write an app that runs the target app (passed in as a command-line parameter) and logs the result (via the exit code, for instance), and then change the scheduler items to call that with the requisite parameter.
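The delegating idea is language-agnostic: the wrapper only has to run the target app, read its exit code, and log it. A rough sketch in Node/TypeScript rather than .NET (run-and-log.ts and process-health.log are made-up names):
// run-and-log.ts - wrapper that runs an app, logs its exit code, and passes it on
import { spawnSync } from 'child_process';
import { appendFileSync } from 'fs';

const [app, ...args] = process.argv.slice(2);
if (!app) throw new Error('usage: node run-and-log.js <path-to-app> [args...]');

// Run the real application, letting its console output through untouched.
const result = spawnSync(app, args, { stdio: 'inherit' });

// Exit code 0 means success; anything else (e.g. 1703) is the app flagging a failure.
appendFileSync(
  'process-health.log',
  `${new Date().toISOString()} ${app} exited with ${result.status}\n`
);

// Propagate the exit code so the scheduler still sees the failure too.
process.exit(result.status ?? 1);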