Please assist me with this: I have a Tcl array called all_tags, but I need to convert it into a JavaScript array in my page, and I am weak when it comes to JavaScript.
Please advise me whether the below is correct, and if not, what is the right way?
<script>
var mytags = new Array();
<%
foreach tag $all_tags {
ns_puts [subst {
mytags.push('$tag');
}]
}
%>
</script>
And afterwards, is it possible to use my JavaScript array in a Tcl proc?
To turn data in Tcl into JSON, you want the json::write package from Tcllib. You'd use it like this to make a JSON object from a Tcl array (and a similar approach works for Tcl dictionaries):
package require json::write
set accumulate {}
foreach {key value} [array get yourArray] {
lappend accumulate $key [json::write string $value]
}
set theJsonObject [json::write object {*}$accumulate]
To turn a Tcl list into a JSON array:
package require json::write
set accumulate {}
foreach item $yourList {
lappend accumulate [json::write string $item]
}
set theJsonArray [json::write array {*}$accumulate]
Note in these two cases I've assumed that the values are all to be represented as JSON strings. If the values to embed are numbers (or true or false) you don't need to do anything special; the values as Tcl sees them work just fine as JSON literals. Embedding lists/arrays/dicts takes “recursive” use of json::write and a bit more planning — it's not automatic as Tcl and JSON have really very different concepts of types.
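As a cross-language sanity check (Python here purely for illustration, not part of the Tcl answer): the text produced by the snippets above is ordinary JSON, so any JSON parser can round-trip it, and the array text can be embedded in a page directly as a JavaScript array literal, which is a more robust answer to the original question than a push loop:

```python
import json

# What the Tcl snippets above would emit (modulo whitespace), assuming
# the sample data {id {} name Dinesh} for the object and {a b c} for
# the list; the exact strings are assumptions for illustration.
obj_text = '{"id":"","name":"Dinesh"}'
arr_text = '["a","b","c"]'

assert json.loads(obj_text) == {"id": "", "name": "Dinesh"}
assert json.loads(arr_text) == ["a", "b", "c"]

# Embedded in the page, the array text is already valid JavaScript:
#   var mytags = ["a","b","c"];
```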
All of the documentation and examples I have seen for the Perl JSON::XS module use an OO interface, e.g.
print JSON::XS->new->ascii()->pretty()->canonical()->encode($in);
But I don't necessarily want all those options every time, I'd prefer to send them in a hash like you can with the basic JSON module, e.g.
print to_json($in, { canonical => 1, pretty => 1, ascii => 1 } );
Sending those options to JSON::XS's encode_json yields
Too many arguments for JSON::XS::encode_json
Is there any way to do that?
JSON's to_json uses JSON::XS if it's installed, so if you want a version of to_json that uses JSON::XS, simply use the one from JSON.
Or, you could recreate to_json.
sub to_json {
    my $encoder = JSON::XS->new();
    if (@_ > 1) {
        my $opts = $_[1];
        for my $method (keys(%$opts)) {
            $encoder->$method($opts->{$method});
        }
    }
    return $encoder->encode($_[0]);
}
But that doesn't stop you from passing in the options every time. If you're encoding multiple data structures, it's best to create a single encoder object and reuse it.
my $encoder = JSON::XS->new->ascii->pretty->canonical;
print $encoder->encode($in);
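The configure-once, reuse-everywhere pattern isn't Perl-specific. As an illustration only (Python's standard library, not part of the answer), json.JSONEncoder works the same way: set the options when constructing the encoder, then reuse it for every document:

```python
import json

# Configure the options once (sort_keys is roughly canonical,
# indent is roughly pretty), then reuse the encoder object,
# as with the JSON::XS encoder above.
encoder = json.JSONEncoder(sort_keys=True, indent=1)

out = encoder.encode({"b": 2, "a": 1})
assert out == '{\n "a": 1,\n "b": 2\n}'
```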
When dictionaries were first implemented and added to Tcl, why was the dict get command implemented in a way that allows an error to occur if an attempt is made to retrieve a value for a key that is not present in the dictionary?
This requires you to wrap the command in a catch statement every time you use it if you want to ensure that it is completely safe. It always seemed to me that a frequently used command like this would have some sort of exception handling built in.
It's a common design choice in Tcl (as well as some other languages). When a command like dict get (or, more commonly, open) fails, the program has to deal with it, which means it has to be alerted to the failure in some way.
The most common options are for the failing command to either
Return an out-of-domain value (such as null in languages that have it), or
Raise an exception.
(E.g. the lsearch command returns an index value if successful and -1 if it fails (the first option), while the dict get command returns a value if successful and raises an exception if it fails (the second option).)
The first option isn't really practicable for the dict get command, since there is no out-of-domain value. Any Tcl value could possibly be stored in a dictionary, so you can't look at the result of dict get and know that it has failed to find a value. The empty string is often used as a pseudo-null value in Tcl, but it's quite likely that empty strings are actual values in a dictionary.
So dict get raises an exception when it fails. It's not so bad. Exceptions have a lot of neat properties, such as taking control directly to the nearest enclosing handler regardless of how many stack levels it has to unwind.
(It's not really possible to handle all exceptions inside the command: a handler must know how to deal with the error, and dict get can't know that.)
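The same trade-off shows up in other languages. As an illustration only (Python, not Tcl): a plain dictionary subscript raises, exactly like dict get, while the get method offers a sentinel default, which is only safe because None cannot collide with a stored value:

```python
d = {"id": "", "name": "Dinesh"}  # sample data for illustration

# The "raise an exception" option, like [dict get]:
try:
    d["age"]
    raised = False
except KeyError:
    raised = True
assert raised

# The "out-of-domain value" option. Python has None, so this is safe;
# Tcl has no such value, which is exactly the problem described above.
assert d.get("age") is None
assert d.get("id") == ""  # an empty string is a real stored value
```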
Either way, a command that can fail needs to be wrapped in some kind of check. If the foo command is used to get a resource that might not be available and there is no sensible default, the code calling it must look either like this:
if {[set x [foo]] ne {BAD_RETURN_VALUE}} {
# use the resource
} else {
# deal with failure
}
or like this:
try {
foo
} on ok x {
# use the resource
} on error {} {
# deal with failure
}
or like this (if a predicate function predicting if foo will succeed exists):
if {[foo-will-succeed]} {
set x [foo]
# use the resource
} else {
# deal with failure
}
Which is about as much bother in each of the cases. Since out-of-domain values are rare in Tcl and error handling is so versatile, the predicate or exception strategies are usually favored.
patthoyts has already shown one way to add an error-suppressing getter function to the dict ensemble. Another relatively lightweight invocation is
set foo [try {dict get $bar xyzzy} on error {} {}]
which returns the result of the dict get call if successful and the empty string if not, and squashes any errors raised.
set foo [try {dict get $bar xyzzy} on error {} {expr {42}}]
This invocation substitutes a default value on failure: the result of the handler body becomes the result of try. (A bare return 42 in the handler would instead return from the enclosing proc.)
If the invocation is still bothersome, it can be made into a command:
proc dictget args {
set default {}
if {[lindex $args 0] eq {-default}} {
set args [lassign $args - default]
}
try {
dict get {*}$args
} on error {} {
set default
}
}
The synopsis for this is
dictget ?-default value? ?dictionaryValue? ?key ...?
Documentation: dict, if, proc, return, set, try
The dict command is implemented as an ensemble. This means you can very easily extend it yourself to achieve this. I like to call this dict get? and have it return an empty value if the key does not exist. We can add this new subcommand as follows:
proc ::tcl::dict::get? {dict key} {
if {[dict exists $dict $key]} {
return [dict get $dict $key]
}
return
}
namespace ensemble configure dict \
-map [linsert [namespace ensemble configure dict -map] end get? ::tcl::dict::get?]
As you can see this trivially wraps up the dict exists call with the dict get call but presents it as a builtin part of the dict command due to the ensemble update. In use it looks like this:
if {[dict get? $meta x-check-query] eq "yes"} {
... do stuff ...
}
(This can be seen in action in the Tcl test suite httpd test server code.)
I guess that is why we are provided with the dict exists command.
You might expect dict get to return an empty string if the key doesn't exist. But an implementation like that would cause problems if the actual value of a key is itself an empty string.
% set demo {id {} name Dinesh}
id {} name Dinesh
% dict get $demo id
% dict get $demo age
key "age" not known in dictionary
%
Use dict exists if you want to skip catch.
In order to create an API that's consistent for strictly typed languages, I need to modify all JSON to return quoted strings in place of integers, without going through the underlying data one by one and modifying it.
This is how JSON is generated now:
my $json = JSON->new->allow_nonref->allow_unknown->allow_blessed->utf8;
$output = $json->encode($hash);
What would be a good way to say, "And quote every scalar within that $hash"?
Both of JSON's backends (JSON::PP and JSON::XS) base the output type on the internal storage of the value. The solution is to stringify the non-reference scalars in your data structure.
sub recursive_inplace_stringification {
my $reftype = ref($_[0]);
if (!length($reftype)) {
$_[0] = "$_[0]" if defined($_[0]);
}
elsif ($reftype eq 'ARRAY') {
recursive_inplace_stringification($_) for @{ $_[0] };
}
elsif ($reftype eq 'HASH') {
recursive_inplace_stringification($_) for values %{ $_[0] };
}
else {
die("Unsupported reference to $reftype\n");
}
}
# Convert numbers to strings.
recursive_inplace_stringification($hash);
# Convert to JSON.
my $json = JSON->new->allow_nonref->utf8->encode($hash);
If you actually need the functionality provided by allow_unknown and allow_blessed, you will need to reimplement it inside of recursive_inplace_stringification (perhaps by copying it from JSON::PP if licensing allows), or you could use the following before calling recursive_inplace_stringification:
# Convert objects to strings.
$hash = JSON->new->allow_nonref->decode(
JSON->new->allow_nonref->allow_unknown->allow_blessed->encode(
$hash));
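The recursive walk translates directly to other languages. Here is a hedged Python sketch of the same idea (illustration only; it returns a stringified copy rather than modifying in place as the Perl version does):

```python
import json

def stringify_scalars(value):
    """Replace every plain scalar with its string form, recursively."""
    if isinstance(value, dict):
        return {k: stringify_scalars(v) for k, v in value.items()}
    if isinstance(value, list):
        return [stringify_scalars(v) for v in value]
    return None if value is None else str(value)

data = {"id": 7, "scores": [1, 2.5], "name": "x"}
quoted = stringify_scalars(data)
assert json.dumps(quoted, sort_keys=True) == \
    '{"id": "7", "name": "x", "scores": ["1", "2.5"]}'
```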
I am using PowerShell v3 and the Windows PowerShell ISE. I have the following function that works fine:
function Get-XmlNode([xml]$XmlDocument, [string]$NodePath, [string]$NamespaceURI = "", [string]$NodeSeparatorCharacter = '.')
{
# If a Namespace URI was not given, use the Xml document's default namespace.
if ([string]::IsNullOrEmpty($NamespaceURI)) { $NamespaceURI = $XmlDocument.DocumentElement.NamespaceURI }
# In order for SelectSingleNode() to actually work, we need to use the fully qualified node path along with an Xml Namespace Manager, so set them up.
[System.Xml.XmlNamespaceManager]$xmlNsManager = New-Object System.Xml.XmlNamespaceManager($XmlDocument.NameTable)
$xmlNsManager.AddNamespace("ns", $NamespaceURI)
[string]$fullyQualifiedNodePath = Get-FullyQualifiedXmlNodePath -NodePath $NodePath -NodeSeparatorCharacter $NodeSeparatorCharacter
# Try and get the node, then return it. Returns $null if the node was not found.
$node = $XmlDocument.SelectSingleNode($fullyQualifiedNodePath, $xmlNsManager)
return $node
}
Now, I will be creating a few similar functions, so I want to break the first 3 lines out into a new function so that I don't have to copy-paste them everywhere, so I have done this:
function Get-XmlNamespaceManager([xml]$XmlDocument, [string]$NamespaceURI = "")
{
# If a Namespace URI was not given, use the Xml document's default namespace.
if ([string]::IsNullOrEmpty($NamespaceURI)) { $NamespaceURI = $XmlDocument.DocumentElement.NamespaceURI }
# In order for SelectSingleNode() to actually work, we need to use the fully qualified node path along with an Xml Namespace Manager, so set them up.
[System.Xml.XmlNamespaceManager]$xmlNsManager = New-Object System.Xml.XmlNamespaceManager($XmlDocument.NameTable)
$xmlNsManager.AddNamespace("ns", $NamespaceURI)
return $xmlNsManager
}
function Get-XmlNode([xml]$XmlDocument, [string]$NodePath, [string]$NamespaceURI = "", [string]$NodeSeparatorCharacter = '.')
{
[System.Xml.XmlNamespaceManager]$xmlNsManager = Get-XmlNamespaceManager -XmlDocument $XmlDocument -NamespaceURI $NamespaceURI
[string]$fullyQualifiedNodePath = Get-FullyQualifiedXmlNodePath -NodePath $NodePath -NodeSeparatorCharacter $NodeSeparatorCharacter
# Try and get the node, then return it. Returns $null if the node was not found.
$node = $XmlDocument.SelectSingleNode($fullyQualifiedNodePath, $xmlNsManager)
return $node
}
The problem is that when "return $xmlNsManager" executes the following error is thrown:
Cannot convert the "System.Object[]" value of type "System.Object[]" to type "System.Xml.XmlNamespaceManager".
So even though I have explicitly cast my $xmlNsManager variables to be of type System.Xml.XmlNamespaceManager, when it gets returned from the Get-XmlNamespaceManager function PowerShell is converting it to an Object array.
If I don't explicitly cast the value returned from the Get-XmlNamespaceManager function to System.Xml.XmlNamespaceManager, then the following error is thrown from the .SelectSingleNode() function because the wrong data type is being passed into the function's 2nd parameter.
Cannot find an overload for "SelectSingleNode" and the argument count: "2".
So for some reason PowerShell is not maintaining the data type of the return variable. I would really like to get this working from a function so that I don't have to copy-paste those 3 lines all over the place. Any suggestions are appreciated. Thanks.
What's happening is that PowerShell is converting your namespace manager object to an object array.
I think it has to do with PowerShell's nature of "unrolling" collections when sending objects down the pipeline. I think PowerShell will do this for any type implementing IEnumerable (has a GetEnumerator method).
As a workaround, you can use the comma trick to prevent this behavior and send the object as a whole collection.
function Get-XmlNamespaceManager([xml]$XmlDocument, [string]$NamespaceURI = "")
{
...
$xmlNsManager.AddNamespace("ns", $NamespaceURI)
return ,$xmlNsManager
}
More specifically, what is happening here is that your coding habit of strongly typing $fullyQualifiedNodePath is trying to turn the result of the Get (which is a list of objects) into a string.
[string]$foo
will constrain the variable $foo to only be a string, no matter what came back. In this case, your type constraint is what is subtly screwing up the return and making it Object[].
Also, looking at your code, I would personally recommend you use Select-Xml (built into V2 and later), rather than do a lot of hand-coded XML unrolling. You can do namespace queries in Select-Xml with -Namespace @{x="..."}.
When reading some JSON data structures, and then trying to Dump them using YAML::Tiny, I sometimes get the error
YAML::Tiny does not support JSON::XS::Boolean
I understand why this is the case (in particular YAML::Tiny does not support booleans, which JSON is keen to clearly distinguish from other scalars), but is there a quick hack to turn those JSON::XS::Boolean objects into plain 0's and 1's just for quick dump-to-the-screen purposes?
YAML::Tiny doesn't support objects. Unfortunately, it doesn't even have an option to just stringify all objects, which would handle JSON::XS::Boolean.
You can do that fairly easily with a recursive function, though:
use strict;
use warnings;
use 5.010; # for say
use JSON::XS qw(decode_json);
use Scalar::Util qw(blessed reftype);
use YAML::Tiny qw(Dump);
my $hash = decode_json('{ "foo": { "bar": true }, "baz": false }');
# Stringify all objects in $hash:
sub stringify_objects {
for my $val (@_) {
next unless my $ref = reftype $val;
if (blessed $val) { $val = "$val" }
elsif ($ref eq 'ARRAY') { stringify_objects(@$val) }
elsif ($ref eq 'HASH') { stringify_objects(values %$val) }
}
}
stringify_objects($hash);
say Dump $hash;
This function doesn't bother processing scalar references, because JSON won't produce them. It also doesn't check whether an object actually has overloaded stringification.
Data::Rmap doesn't work well for this because it will only visit a particular object once, no matter how many times it appears. Since the JSON::XS::Boolean objects are singletons, that means it will only find the first true and the first false. It's possible to work around that, but it requires delving into the source code to determine how keys are generated in its seen hash:
use Data::Rmap qw(rmap_ref);
use Scalar::Util qw(blessed refaddr);
# Stringify all objects in $hash:
rmap_ref { if (blessed $_) { delete $_[0]->seen->{refaddr $_};
$_ = "$_" } } $hash;
I think the recursive function is clearer, and it's not vulnerable to changes in Data::Rmap.
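The singleton caveat is easy to reproduce outside Perl. As an illustration only, Python's True and False are also singletons, so a walker that memoizes visited nodes by identity would rewrite only the first occurrence of each; a plain recursion (mirroring the function above) visits every occurrence:

```python
data = {"foo": {"bar": True}, "baz": False, "qux": True}
assert data["foo"]["bar"] is data["qux"]  # booleans are singletons here too

def bools_to_ints(value):
    # Plain recursion: no seen-hash, so repeated singletons all get visited.
    if isinstance(value, dict):
        return {k: bools_to_ints(v) for k, v in value.items()}
    if isinstance(value, list):
        return [bools_to_ints(v) for v in value]
    return int(value) if isinstance(value, bool) else value

converted = bools_to_ints(data)
assert converted == {"foo": {"bar": 1}, "baz": 0, "qux": 1}
assert not isinstance(converted["baz"], bool)  # a real 0, not False
```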