Full path of current Tcl script - tcl

Is there a way to get the full path of the currently executing Tcl script?
In PHP it would be: __FILE__

Depending on what you mean by "currently executing TCL script", you might actually seek info script, or possibly even info nameofexecutable or something more esoteric.

The correct way to retrieve the name of the file that the current statement resides in, is this (a true equivalent to PHP/C++'s __FILE__):
set thisFile [ dict get [ info frame 0 ] file ]
Pseudocode (how it works):
set thisFile <value> : sets variable thisFile to value
dict get <dict> file : returns the file value from a dict
info frame <#> : returns a dict with information about the frame at the specified stack level (#), and 0 will return the most recent stack frame
NOTICE: See end of post for more information on info frame.
In this case, the file value returned from info frame is already normalized, so file normalize <path> is not needed.
The difference between info script and info frame matters mainly for Tcl packages. If info script is used in a Tcl file that was loaded via package require <name>, it returns the path of the script currently being evaluated rather than the name of the Tcl file that actually contains the info script command. The info frame example given here, however, correctly returns the name of the file that contains the command.
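For example (a rough sketch with made-up file names, not code from the original post): suppose pkgfile.tcl is loaded with package require and defines a proc that main.tcl later calls:
# pkgfile.tcl -- hypothetical file loaded via: package require mypkg
proc whereAmI {} {
    # file that physically contains this command (pkgfile.tcl)
    set here [ dict get [ info frame 0 ] file ]
    # script currently being evaluated (main.tcl when it calls this proc)
    set script [ info script ]
    return [ list $here $script ]
}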
If you want the name of the script currently being evaluated, then:
set sourcedScript [ info script ]
If you want the name of the script (or interpreter) that was initially invoked, then:
set scriptAtInvocation $::argv0
If you want the name of the executable that was initially invoked, then:
set exeAtInvocation [ info nameofexecutable ]
UPDATE - Details about: info frame
Here is what a stack trace looks like within Tcl. The frame_index shows what info frame $frame_index returns for values from 0 through [ info frame ].
Calling info frame [ info frame ] is functionally equivalent to info frame 0, but using 0 is of course faster.
Only frames 1 through [ info frame ] actually exist; 0 simply refers to the current frame, so it behaves like [ info frame ]. In this example you can see that 0 and 5 (which is [ info frame ]) are the same (a rough sketch of a proc that produces such a trace follows the listing):
frame_index: 0 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
frame_index: 1 | type = source | line = 6 | level = 4 | file = /tcltest/main.tcl | cmd = a
frame_index: 2 | type = source | proc = ::a | line = 2 | level = 3 | file = /tcltest/a.tcl | cmd = b
frame_index: 3 | type = source | proc = ::b | line = 2 | level = 2 | file = /tcltest/b.tcl | cmd = c
frame_index: 4 | type = source | proc = ::c | line = 5 | level = 1 | file = /tcltest/c.tcl | cmd = stacktrace
frame_index: 5 | type = source | proc = ::stacktrace | line = 26 | level = 0 | file = /tcltest/stacktrace.tcl | cmd = info frame $frame_counter
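A minimal sketch of a stacktrace proc along these lines (an illustration, not the exact code from the Xilinx app linked below; the frames you see will depend on how the proc is reached):
proc stacktrace {} {
    set top [ info frame ]
    for {set frame_counter 0} {$frame_counter <= $top} {incr frame_counter} {
        # print the raw frame dict; format individual keys as needed
        puts "frame_index: $frame_counter | [ info frame $frame_counter ]"
    }
}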
See:
https://github.com/Xilinx/XilinxTclStore/blob/master/tclapp/xilinx/profiler/app.tcl#L273

You want $argv0

You can use [file normalize] to get the fully normalized name, too.
file normalize $argv0
file normalize [info nameofexecutable]

Seconds after I posted my question ... lindex $argv 0 is a good starting point ;-)

Related

How to decipher comments in generated Verilog from chisel?

Here is some generated Verilog from the PassTrough module found in:
https://github.com/freechipsproject/chisel-bootcamp/blob/master/2.1_first_module.ipynb
module PassTrough( // #[:#3.2]
  input        clock, // #[:#4.4]
  input        reset, // #[:#5.4]
  input  [9:0] io_in, // #[:#6.4]
  output [9:0] io_out // #[:#6.4]
);
  assign io_out = io_in; // #[buffer.scala 10:10:#8.4]
endmodule
Are there any resources about understanding what is in these comments? I can see that they relate to the code locations in the original Scala file, but I would like to know more details.
// #[buffer.scala 10:10:#8.4]
A more detailed explanation of this line would be useful.
These are source locators and will show up in generated FIRRTL or Verilog. These tell you what line in a source file (Chisel or FIRRTL) was used to generate a specific line in the downstream FIRRTL or Verilog.
The format is generally: #[<file> <line>:<column> ...]
More than one source locator may be present.
Example
Consider the following example pulled from the BoringUtilsSpec. The line numbers (which do not start at zero, as this was extracted from a larger file) are shown along with the column numbers, so you can see how things line up between them. For example, the declaration of notA happens on line 27, column 20, and the assignment notA := ~a happens on line 30, column 10. You see 27:20 and 30:10 show up in the FIRRTL. In the Verilog, these get merged somewhat and you wind up with source locators indicating both 27:20 and 30:10:
// --------------------------------------------+----+
// File: BoringUtilsSpec.scala                 |    |
// --------------------------------------------+----+
// Column Number                               |    |
// --------------------------------------------+----+
//           1         2         3         4   |    |
// 01234567890123456789012345678901234567890   |    |
// --------------------------------------------+----+
class BoringInverter extends Module {       // | 24 | Line Number
  val io = IO(new Bundle{})                 // | 25 |
  val a = Wire(UInt(1.W))                   // | 26 |
  val notA = Wire(UInt(1.W))                // | 27 |
  val b = Wire(UInt(1.W))                   // | 28 |
  a := 0.U                                  // | 29 |
  notA := ~a                                // | 30 |
  b := a                                    // | 31 |
  chisel3.assert(b === 1.U)                 // | 32 |
  BoringUtils.addSource(notA, "x")          // | 33 |
  BoringUtils.addSink(b, "x")               // | 34 |
}                                           // | 35 |
// --------------------------------------------+----+
This produces the following FIRRTL:
module BoringUtilsSpecBoringInverter :
  input clock : Clock
  input reset : UInt<1>
  output io : {}
  wire a : UInt<1> #[BoringUtilsSpec.scala 26:17]
  wire notA : UInt<1> #[BoringUtilsSpec.scala 27:20]
  wire b : UInt<1> #[BoringUtilsSpec.scala 28:17]
  a <= UInt<1>("h00") #[BoringUtilsSpec.scala 29:7]
  node _T = not(a) #[BoringUtilsSpec.scala 30:13]
  notA <= _T #[BoringUtilsSpec.scala 30:10]
  b <= a #[BoringUtilsSpec.scala 31:7]
  node _T_1 = eq(b, UInt<1>("h01")) #[BoringUtilsSpec.scala 32:22]
  node _T_2 = bits(reset, 0, 0) #[BoringUtilsSpec.scala 32:19]
  node _T_3 = or(_T_1, _T_2) #[BoringUtilsSpec.scala 32:19]
  node _T_4 = eq(_T_3, UInt<1>("h00")) #[BoringUtilsSpec.scala 32:19]
// assert not shown
And the following Verilog:
module BoringUtilsSpecBoringInverter(
  input clock,
  input reset
);
  wire _T; // #[BoringUtilsSpec.scala 30:13]
  wire notA; // #[BoringUtilsSpec.scala 27:20 BoringUtilsSpec.scala 30:10]
  wire _T_3; // #[BoringUtilsSpec.scala 32:19]
  wire _T_4; // #[BoringUtilsSpec.scala 32:19]
  assign _T = 1'h1; // #[BoringUtilsSpec.scala 30:13]
  assign notA = 1'h1; // #[BoringUtilsSpec.scala 27:20 BoringUtilsSpec.scala 30:10]
  assign _T_3 = _T | reset; // #[BoringUtilsSpec.scala 32:19]
  assign _T_4 = _T_3 == 1'h0; // #[BoringUtilsSpec.scala 32:19]
  // assert not shown
endmodule
Caveats
Generator Bootcamp
If you are running this in the Chisel Bootcamp Jupyter Notebook or through an sbt console/REPL, the source locators may not make as much sense as there really isn't a file here with lines.
Difference with Annotation
These source locators are not Annotations, in case anyone has come across that name.
Annotations are metadata associated with circuit components. Source locators (which map to Info in the FIRRTL IR) are associated with specific statements in some source file. Under the hood they're just strings that get generated and then copied around. There is no guarantee that source locators will be preserved; they may be changed or deleted arbitrarily. Conversely, Annotations are preserved and renamed across transformations and have strong guarantees on how they behave.
Consequently, do not rely on source locators for anything other than an aid if you need to debug the Chisel or FIRRTL compiler stages.

List of all instances created by a module

I have a number of module invocations that look similar to this
module "gcpue4a1" {
  source = "../../../modules/pods"
}
where the module is creating instances, DNS records, etc.
locals {
  gateway_name = "gateway-${var.network_zone}-${var.environment}-1"
}

resource "google_compute_instance" "gateway" {
  name                      = "${local.gateway_name}"
  machine_type              = "n1-standard-8"
  zone                      = "${var.zone}"
  allow_stopping_for_update = true
}
How can I iterate over a list of all instances that have been created through this module? Can I do it with instance tags or labels?
In the end, what I want is to be able to iterate over a list to export to an Ansible inventory file. But I'm just not sure how to do this when my resources are encapsulated in modules.
With terraform show I can clearly see the structure of the variables.
➜ gcp-us-east4 git:(integration) ✗ terraform show | grep google_compute_instance.gateway -n1
640- zone = us-east4-a
641:module.screencast-gcp-pod-gcpue4a1-food.google_compute_instance.gateway:
642- id = gateway-gcpue4a1-food-1
--
--
991- zone = us-east4-a
992:module.screencast-gcp-pod-gcpue4a2-food.google_compute_instance.gateway:
993- id = gateway-gcpue4a2-food-1
--
--
1342- zone = us-east4-a
1343:module.screencast-gcp-pod-gcpue4a3-food.google_compute_instance.gateway:
1344- id = gateway-gcpue4a3-food-1
--
--
1693- zone = us-east4-a
1694:module.screencast-gcp-pod-gcpue4a4-food.google_compute_instance.gateway:
1695- id = gateway-gcpue4a4-food-1
The etcd inventory piece works just fine when I explicitly say which node I want. The overall inventory piece below it does not and I'm not sure how to fix it.
##Create ETCD Inventory
provisioner "local-exec" {
  command = "echo \"\n[etcd]\n${google_compute_instance.k8s-master.name} ansible_ssh_host=${google_compute_instance.k8s-master.network_interface.0.address}\" >> kubespray-inventory"
}

##Create Nodes Inventory
provisioner "local-exec" {
  command = "echo \"\n[kube-node]\" >> kubespray-inventory"
}
# provisioner "local-exec" {
#   command = "echo \"${join("\n",formatlist("%s ansible_ssh_host=%s", google_compute_instance.gateway.*.name, google_compute_instance.gateway.*.network_interface.0.address))}\" >> kubespray-inventory"
# }
➜ gcp-us-east4 git:(integration) ✗ terraform apply
Error: resource 'null_resource.ansible-provision' provisioner local-exec (#4): unknown resource 'google_compute_instance.gateway' referenced in variable google_compute_instance.gateway.*.id
You can make sure each module adds a label that matches the module, and you can then use gcloud compute instances list with a filter to only show the instances carrying that specific label.
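For example (a rough sketch; the label key "pod" and the variable var.pod_name are made up for illustration, and the exact arguments depend on your module):
resource "google_compute_instance" "gateway" {
  # ... existing arguments ...

  labels = {
    pod = "${var.pod_name}"   # hypothetical variable identifying this module instance
  }
}
Then, outside Terraform, something like:
gcloud compute instances list --filter="labels.pod=gcpue4a1"
would list only the gateway instances created by that particular module.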

Assign puppet Hash to hieradata yaml

I want to assign a hash variable from Puppet to a Hiera data structure, but I only get a string.
Here is an example to illustrate what I want. In the end I don't want to access a fact.
1 ---
2 filesystems:
3 - partitions: "%{::partitions}"
And here is my debug code:
1 $filesystemsarray = lookup('filesystems',Array,'deep',[])
2 $filesystems = $filesystemsarray.map | $fs | {
3 notice("fs: ${fs['partitions']}")
4 }
5
6 notice("sda1: ${filesystemsarray[0]['partitions']['/dev/sda1']}")
The map leads to the following output:
Notice: Scope(Class[Profile::App::Kms]): fs: {"/dev/mapper/localhost--vg-root"=>{"filesystem"=>"ext4", "mount"=>"/", "size"=>"19.02 GiB", "size_bytes"=>20422066176, "uuid"=>"02e2ba2c-2ee4-411d-ac63-fc963c8026b4"}, "/dev/mapper/localhost--vg-swap_1"=>{"filesystem"=>"swap", "size"=>"512.00 MiB", "size_bytes"=>536870912, "uuid"=>"95ba4b2a-7434-48fd-9331-66443c752a9e"}, "/dev/sda1"=>{"filesystem"=>"ext2", "mount"=>"/boot", "partuuid"=>"de90a5ed-01", "size"=>"487.00 MiB", "size_bytes"=>510656512, "uuid"=>"398f2ab6-a7e8-4983-bd81-db03984fbd0e"}, "/dev/sda2"=>{"size"=>"1.00 KiB", "size_bytes"=>1024}, "/dev/sda5"=>{"filesystem"=>"LVM2_member", "partuuid"=>"de90a5ed-05", "size"=>"19.52 GiB", "size_bytes"=>20961034240, "uuid"=>"wLKRQm-9bdn-mHA8-M8bE-NL76-Gmas-L7Gp0J"}}
This seems to be a Hash as expected, but the notice in line 6 leads to:
Error: Evaluation Error: A substring operation does not accept a String as a character index. Expected an Integer at ...
What am I doing wrong?

Parse txt file with shell

I have a txt file containing the output from several commands executed on a piece of networking equipment. I want to parse this txt file so I can sort the data and print it on an HTML page.
What is the best/easiest way to do this? Export every command's output to an array and then print the sorted array into the HTML code?
Each command's output sits between separator lines and is tabular data. For example:
*********************************************************************
# command 1
*********************************************************************
Object column1 column2 Total
-------------------------------------------------------------------
object 1 526 9484 10010
object 2 2 10008 10010
Object 3 0 20000 20000
*********************************************************************
# command 2
*********************************************************************
(... tabular data ...)
Can someone suggest any code or an example file showing how to make this work?
Thanks!
This can be easily done in Python with this example code:
f = open('input.txt')
rulers = 0   # number of '****' ruler lines seen for the current command header
table = []
for line in f.readlines():
    if '****' in line:
        rulers += 1
        if rulers == 2:        # the header of the current command is complete
            table = []
        elif rulers > 2:       # a new command header begins: flush the previous table
            print(table)
            rulers = 1         # count this ruler as the first of the new header
        continue
    if line == '\n' or '----' in line or line.startswith('#'):
        continue
    table.append(line.split())
print(table)                   # flush the table of the last command
It just prints a list of lists of the tabular values, but it can be formatted into whatever HTML or other format you need.
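For instance, a rough sketch (building on the table produced by the loop above) that turns one parsed table into an HTML <table> string:
def table_to_html(table):
    # each parsed row becomes a <tr> with one <td> per cell
    rows = []
    for row in table:
        cells = ''.join('<td>{}</td>'.format(cell) for cell in row)
        rows.append('<tr>{}</tr>'.format(cells))
    return '<table>\n{}\n</table>'.format('\n'.join(rows))

print(table_to_html(table))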
Import into your spreadsheet software. Export to HTML from there, and modify as needed.

How to convert data from a custom format to CSV?

I have a file whose content is as below. I have only included a few records here, but there are around 1000 records in a single file:
Record type : GR
address : 62.5.196
ID : 1926089329
time : Sun Aug 10 09:53:47 2014
Time zone : + 16200 seconds
address [1] : 61.5.196
PN ID : 412 1
---------- Container #1 (start) -------
inID : 101
---------- Container #1 (end) -------
timerecorded: Sun Aug 10 09:51:47 2014
Uplink data volume : 502838
Downlink data volume : 3133869
Change condition : Record closed
--------------------------------------------------------------------
Record type : GR
address : 61.5.196
ID : 1926089327
time : Sun Aug 10 09:53:47 2014
Time zone : + 16200 seconds
address [1] : 61.5.196
PN ID : 412 1
---------- Container #1 (start) -------
intID : 100
---------- Container #1 (end) -------
timerecorded: Sun Aug 10 09:55:47 2014
Uplink data volume : 502838
Downlink data volume : 3133869
Change condition : Record closed
--------------------------------------------------------------------
Record type : GR
address : 63.5.196
ID : 1926089328
time : Sun Aug 10 09:53:47 2014
Time zone : + 16200 seconds
address [1] : 61.5.196
PN ID : 412 1
---------- Container #1 (start) -------
intID : 100
---------- Container #1 (end) -------
timerecorded: Sun Aug 10 09:55:47 2014
Uplink data volume : 502838
Downlink data volume : 3133869
Change condition : Record closed
My goal is to convert this to a CSV or txt file like below:
Record type| address |ID | time | Time zone| address [1] | PN ID
GR |61.5.196 |1926089329 |Sun Aug 10 09:53:47 2014 |+ 16200 seconds |61.5.196 |412 1
Any guidance on the best way to start this would be great. I think the sample I provided gives a clear idea, but in words: I want to read the header of each record once and put the records' data under the output header.
Thanks for your time and any help or suggestions.
What you're doing is creating an Extract/Transform script (the ET part of an ETL). I don't know which language you're intending to use, but essentially any language can be used. Personally, unless this is a massive file, I'd recommend Python as it's easy to grok and easy to write with the included csv module.
First, you need to understand the format thoroughly.
How are records separated?
How are fields separated?
Are there any fields that are optional?
If so, are the optional fields important, or do they need to be discarded?
Unfortunately, this is all headwork: there's no magical code solution to make this easier. Then, once you have figured out the format, you'll want to start writing code. This is essentially a series of data transformations:
Read the file.
Split it into records.
For each record, transform the fields into an appropriate data structure.
Serialize the data structure into the CSV.
If your file is larger than memory, this can become more complicated; instead of reading and then splitting, for example, you may want to read the file sequentially and create a Record object each time the record delimiter is detected. If your file is even larger, you might want to use a language with better multithreading capabilities to handle the transformation in parallel; but those are more advanced than it sounds like you need to go at the moment.
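For the format you've shown, a minimal sketch of those steps in Python with the bundled csv module might look like this (the file names, the dashed-line split, and the field list are assumptions based on your sample, not a finished solution):
import csv

# columns taken from the desired output in the question
FIELDS = ["Record type", "address", "ID", "time", "Time zone", "address [1]", "PN ID"]

with open("input.txt") as f:
    text = f.read()

records = []
for chunk in text.split("-" * 20):                # records are separated by a long dashed line
    fields = {}
    for line in chunk.splitlines():
        if ":" in line and not line.lstrip().startswith("-"):
            key, _, value = line.partition(":")   # split on the first colon only
            fields[key.strip()] = value.strip()
    if fields:
        records.append(fields)

with open("output.csv", "w", newline="") as out:
    writer = csv.DictWriter(out, fieldnames=FIELDS, delimiter="|", extrasaction="ignore")
    writer.writeheader()
    writer.writerows(records)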
This is a simple PHP script that will read a text file containing your data and write a csv file with the results. If you are on a system which has command line PHP installed, just save it to a file in some directory, copy your data file next to it renaming it to "your_data_file.txt" and call "php whatever_you_named_the_script.php" on the command line from that directory.
<?php
$text = file_get_contents("your_data_file.txt");
$matches;
preg_match_all("/Record type[\s\v]*:[\s\v]*(.+?)address[\s\v]*:[\s\v]*(.+?)ID[\s\v]*:[\s\v]*(.+?)time[\s\v]*:[\s\v]*(.+?)Time zone[\s\v]*:[\s\v]*(.+?)address \[1\][\s\v]*:[\s\v]*(.+?)PN ID[\s\v]*:[\s\v]*(.+?)/su", $text, $matches, PREG_SET_ORDER);
$csv_file = fopen("your_csv_file.csv", "w");
if($csv_file) {
    if(fputcsv($csv_file, array("Record type","address","ID","time","Time zone","address [1]","PN ID"), "|") === FALSE) {
        echo "could not write headers to csv file\n";
    }
    foreach($matches as $match) {
        $clean_values = array();
        for($i=1;$i<8;$i++) {
            $clean_values[] = trim($match[$i]);
        }
        if(fputcsv($csv_file, $clean_values, "|") === FALSE) {
            echo "could not write data to csv file\n";
        }
    }
    fclose($csv_file);
} else {
    die("could not open csv file\n");
}
This script assumes that your data records are always formatted similar to the examples you have posted and that all values are always present. If the data file may have exceptions to those rules, the script probably has to be adapted accordingly. But it should give you an idea of how this can be done.
Update
Adapted the script to deal with the full format provided in the updated question. The regular expression now matches single data lines (extracting their values) as well as the record separator made up of dashes. The loop has changed a bit and does now fill up a buffer array field by field until a record separator is encountered.
<?php
$text = file_get_contents("your_data_file.txt");
// this will match whole lines
// only if they either start with an alpha-num character
// or are completely made of dashes (record separator)
// it also extracts the values of data lines one by one
$regExp = '/(^\s*[a-zA-Z0-9][^:]*:(.*)$|^-+$)/m';
$matches;
preg_match_all($regExp, $text, $matches, PREG_SET_ORDER);
$csv_file = fopen("your_csv_file.csv", "w");
if($csv_file) {
    // in case the number or order of fields changes, adapt this array as well
    $column_headers = array(
        "Record type",
        "address",
        "ID",
        "time",
        "Time zone",
        "address [1]",
        "PN ID",
        "inID",
        "timerecorded",
        "Uplink data volume",
        "Downlink data volume",
        "Change condition"
    );
    if(fputcsv($csv_file, $column_headers, "|") === FALSE) {
        echo "could not write headers to csv file\n";
    }
    $clean_values = array();
    foreach($matches as $match) {
        // first entry will contain the whole line
        // remove surrounding whitespace
        $whole_line = trim($match[0]);
        if(strpos($whole_line, '-') !== 0) {
            // this match starts with something else than -
            // so it must be a data field, store the extracted value
            $clean_values[] = trim($match[2]);
        } else {
            // this match is a record separator, write csv line and reset buffer
            if(fputcsv($csv_file, $clean_values, "|") === FALSE) {
                echo "could not write data to csv file\n";
            }
            $clean_values = array();
        }
    }
    if(!empty($clean_values)) {
        // there was no record separator at the end of the file
        // write the last entry that is still in the buffer
        if(fputcsv($csv_file, $clean_values, "|") === FALSE) {
            echo "could not write data to csv file\n";
        }
    }
    fclose($csv_file);
} else {
    die("could not open csv file\n");
}
Doing the data extraction using regular expressions is one possible method mostly useful for simple data formats with a clear structure and no surprises. As syrion pointed out in his answer, things can get much more complicated. In that case you might need to write a more sophisticated script than this one.