Cast JSON to PowerShell with proper Auto-Completion (IntelliSense)

Is there a way to get autocompletion with IntelliSense in Visual Studio Code to work properly when I read a JSON file and convert it to a PowerShell object like:
$config = Get-Content "SOME_PATH" | ConvertFrom-Json
$config.attribute1
The problem is that the file needs to be in memory before IntelliSense can get the structure of the JSON file and propose attributes.
If I take the code out and execute it in the PowerShell terminal and then go back to the code editor, the autocompletion works fine.

Currently IntelliSense doesn't do this.
Some of the reasons: when you are coding, the specified file often might not exist, might be a different test version than expected, or could be huge, malformed, etc. By default, it is sometimes safer to just not do this.
By running it in the terminal and loading it in memory, you are explicitly telling IntelliSense what you are using, and then it now "knows" about the object, and can then properly suggest the correct properties and attributes.
As @mklement0 suggests, using the keyboard shortcut F8 will conveniently execute the current line/selection in the integrated terminal, which loads the object into memory and allows you to use IntelliSense in the editor.

To complement HAL9256's helpful answer:
First, some background information; find a working solution in the bottom section.
IntelliSense for variables in Visual Studio Code works if their type is either:
explicitly declared (e.g., [datetime] $var = ...)
or can be inferred from an assigned value.
If an assignment is based on a command (cmdlet, function, script) call, the type can only be inferred from commands with explicitly defined output type(s):
many, but by no means all, cmdlets do declare their output types
functions and scripts must use the [OutputType(<type>)] attribute.
Additionally, with the nondescript [pscustomobject] type returned by ConvertFrom-Json - which has no inherent properties, only those you add on demand; a "property bag" - you only get IntelliSense:
if a variable was assigned from a custom-object literal (e.g., $var = [pscustomobject] @{ one = 1; two = 2 })
if you cast a custom object to a specific type, assuming instances of that type can be constructed from the custom object's properties, something that PowerShell makes easy - see this answer.
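As a quick illustration of such a cast (the Point class here is made up for the example; PowerShell constructs the instance from the property bag's matching properties):

```powershell
# Hypothetical class with typed properties.
class Point {
    [int] $x
    [int] $y
}

# A [pscustomobject] "property bag" whose property names match the class.
$bag = [pscustomobject] @{ x = 1; y = 2 }

# The cast constructs a [Point] from the bag's properties;
# $pt now has a known type, so IntelliSense can list .x and .y.
$pt = [Point] $bag
```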
Solution with a custom class (PSv5+)
Visual Studio Code's IntelliSense (via the PowerShell Extension) does recognize the members of instances of PS custom classes defined via the PSv5+ class statement.
You can therefore use a custom class to mirror the structure of the JSON object you're loading and convert the [pscustomobject] "property bags" returned by ConvertFrom-Json to an instance of that class by way of a cast.
Note: The inherent limitation of this approach is that your class must anticipate all property names that the underlying JSON objects contain, and the two must be kept in sync; otherwise:
if a property name changes on the JSON side, your code will break if the class definition isn't updated accordingly.
if new properties are added on the JSON side, these will be inaccessible unless the class definition is updated accordingly.
class definitions can either be:
directly embedded in a script
imported from modules via the using module statement (note that using Import-Module does not load a module's classes).
To implement the solution at hand, you can use a class definition in one of two ways:
(a) Define a class directly in your script that matches the structure of the JSON objects, and cast the [pscustomobject] instance returned from ConvertFrom-Json to that type; a variable assigned this way supports IntelliSense.
(b) Wrap the JSON-loading functionality in a module that performs the above inside the module and passes the class instance out from a function that declares its [OutputType()] to be of that class; code that imports that module with using module will then get IntelliSense for variables that capture the output from that function.
A simple demonstration of (a):
# Define a class whose properties mirror the underlying JSON.
class Config {
    $foo
    $bar
}
# Load the JSON and cast the resulting [pscustomobject] to the class.
# Note: This cast only works if the JSON object's set of properties
# is either the same as that of the [Config] type or a subset of it.
[Config] $config = '{ "foo": "bar", "bar": 42 }' | ConvertFrom-Json
# Variable $config supports IntelliSense, because it is now known
# to be of type Config.
$config. # shows list of properties
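A sketch of (b), assuming a hypothetical module file ./ConfigModule.psm1 (the class, function, and file names are made up for illustration):

```powershell
# --- ConfigModule.psm1 (hypothetical module file) ---
class Config {
    $foo
    $bar
}

function Get-Config {
    # Declaring the output type lets the editor infer the type
    # of variables assigned from this function's output.
    [OutputType([Config])]
    param([string] $Json)
    [Config] ($Json | ConvertFrom-Json)
}

# --- consuming script ---
# Note: 'using module ./ConfigModule.psm1' (not Import-Module) is required
# so that the Config class itself is visible to the caller:
#   using module ./ConfigModule.psm1
#   $config = Get-Config '{ "foo": "bar", "bar": 42 }'
#   $config.   # IntelliSense lists .foo and .bar
```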

Related

Use object keys as type in JSON Schema

Say I want to validate a YAML file against a JSON schema in IntelliJ IDEA. The file's structure would be like:
foo:
command: touch /tmp/a.txt # I know I don't need this but it's an example
bar:
command: echo "Hello World!" > /tmp/a.txt
baz:
command: cat /tmp/a.txt
dependencies:
- foo
- bar
So the property names can be any string, but the dependencies should only be keys/property names of the root object. Ideally I would specify an enum, but this question, Use object property keys as enum in JSON schema, suggests it's not possible (unless that answer is obsolete).
Still, I have noticed that when you write a schema in IntelliJ and you add a "required": [...] entry, it autocompletes the required fields with the property names of the "properties" object (even though it doesn't use them to validate, but close enough for my purpose). I have checked out the meta-schema at http://json-schema.org/draft-07/schema# but haven't been able to understand how it does that.
Is there a way that I can define my schema so IntelliJ autocompletes based on another property's keys, the way it does when you write a schema?
There is nothing in the schema itself that indicates possible values from data. There's actually no requirement that items in the required array also be defined in properties.
This sort of functionality is defined by the IDE only.
IntelliJ IDEA documents the ability to add custom schemas:
Besides schemas from JSON Schema Store, IntelliJ IDEA lets you
configure and use custom schemas from other storages. You can download
the required schema and store it under the project root or specify the
URL of the resource so IntelliJ IDEA can download the schema
automatically.
To configure a custom JSON Schema:
In the Settings/Preferences dialog (⌘,), go to Languages and Frameworks
| Schemas and DTDs | JSON Schema Mappings.
https://www.jetbrains.com/help/idea/json.html#ws_json_schema_add_custom
It also details how to make IntelliSense provide a rich preview:
Using HTML descriptions in JSON schema #
By default, IntelliJ IDEA escapes HTML characters when displaying
documentation for JSON schema definitions in documentation popups. To
get nice looking documentation with rich HTML markup, store the HTML
description in the x-intellij-html-description extension property
instead of description.
https://www.jetbrains.com/help/idea/json.html#ws_json_show_doc_in_html
However,
autocompletes based on another properties' keys
sounds like custom functionality specifically designed for writing JSON Schema. JSON Schema itself cannot reference data dynamically like that (which I assume is what you were thinking).
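For illustration, a minimal draft-07 schema of the shape that would describe the question's YAML (property names taken from the example; note that the "required" completion described above is IDE behavior when editing the schema itself, not something the schema can express about the data):

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": {
    "type": "object",
    "properties": {
      "command": { "type": "string" },
      "dependencies": {
        "type": "array",
        "items": { "type": "string" }
      }
    },
    "required": ["command"]
  }
}
```

The "additionalProperties" subschema is what allows arbitrary string keys at the root while still constraining each entry's shape; nothing here can restrict "dependencies" items to the root object's own keys.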

How to extend JSON serialization (JsonConverter) with TypeNameHandling.Auto, WITHOUT adding $type manually?

I need to extend default serialization of any object (that implements a specific interface) with, let's say for simplicity, additional properties in the generated JSON (and then be able to load this correctly).
My problem is that I have TypeNameHandling set to Auto, and when I create my custom JsonConverter (Newtonsoft.Json) and invoke the original serialization (with the originally supplied serializer, temporarily disabling my converter to avoid a loop), the "$type" property is not generated automatically (which is required for proper deserialization). It's not added by my parent, and I cannot force the serializer to do it (reason below).
I am also not able to generate it myself, because I don't have any access to my context, i.e. the Type of the property that contains the object instance that I'm serializing, which states the expected type (to be compared with the actual one, to decide whether the $type should be generated or not). I can only generate it always. Is there any way to resolve this?
I want to avoid replacing the default serialization (only to extend it) and to keep the Auto TypeNameHandling behavior.

PowerShell System.Xml.Linq Unable to Parse XML to HTML

I am currently trying to use this script to format an HTML table (https://gallery.technet.microsoft.com/scriptcenter/PowerShell-HTML-Notificatio-e1c5759d#content).
I managed to get most of the functions to work but can't get the Add-HTMLTableColor function to work.
The code I am using is the following:
$params = @{
    Column      = "Used (GB)"
    ScriptBlock = { [double] $args[0] -gt [double] $args[1] }
}
Get-PSDrive -PSProvider FileSystem | New-HTMLTable -setAlternating $false | Add-HTMLTableColor -Argument 40 -attrValue "background-color:#FFFF99;" @params
However, it throws an error: "Namespace incorrect."
I dug into the script provided on TechNet and noticed that the Parse function of System.Xml.Linq returns one big XML node instead of separate XML nodes when HTML is passed in. It does state that it requires System.Xml.Linq v2, and I have only managed to find the DLL for v3 and v4.
Is the incorrect version the root of the problem, or is there another reason why it can't manage to parse the HTML into XML properly? Or is it another problem completely?
I tested another library that uses the System.Xml.Linq DLL file from the system. It also returned an error when attempting to use the DLL.
After replacing the Linq DLL requirement with an assembly import of the Linq functionality, the functions in both libraries were working.
As such, it appears the Linq DLL is not fully compatible with PowerShell and is unable to parse HTML to XML perfectly.
The solution is either to use a custom assembly import (such as the one in this script) or to use a different library altogether.
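For reference, a minimal sketch of loading the Linq types from the installed framework instead of a bundled DLL (the sample markup is made up; note that XDocument.Parse requires well-formed XML/XHTML, which is one reason arbitrary HTML fails):

```powershell
# Load System.Xml.Linq from the framework instead of a local DLL.
Add-Type -AssemblyName System.Xml.Linq

# Well-formed XHTML parses into a tree of separate nodes.
$xml = [System.Xml.Linq.XDocument]::Parse('<table><tr><td>40</td></tr></table>')

# Navigate the parsed tree; each element is its own node.
$xml.Descendants('td') | ForEach-Object { $_.Value }   # -> 40
```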

SWIG C++ TCL : Handling pre-existing objects in memory

How do I access objects for which I don't have the string reference when using SWIG TCL wrappers?
Basically, in my program some of the objects are predefined even before loading the TCL shell. If I were writing the wrappers myself, I would pass a pointer to an object which in turn has pointers to all the objects created thus far. How can I achieve the same behavior through SWIG?
The simplest method would be to add static methods to the class (or some other wrapped class) that return these special instances. SWIG will then wrap the access correctly, and you'll be able to use the static method calling convention to get handles to those instances.
set foo [YourClass_specialFoo] ;# Get the special instance once
$foo bar ... ;# invoke methods on it

Converting a GWT shared object to JSON

Could someone explain to me why in GWT you cannot convert a client/shared POJO (that implements Serializable) into a JSON object without jumping through a load of hoops, like using the AutoBeanFactory (e.g. GWT (Client) = How to convert Object to JSON and send to Server?) or creating JavaScript overlay objects (which extend JavaScriptObject)?
GWT compiles your client objects into a javascript object, so why can't it then simply convert your javascript to JSON if you ask it to?
The GWT JSON library supplied only allows you to JSONify Java objects that extend JavaScriptObject.
I am obviously misunderstanding something about GWT, since GWT compiles a simple Java POJO into a JavaScript object, and in JavaScript you can JSON.stringify it into JSON - so why not in GWT?
GWT compiles your app, it doesn't just convert it. It does take advantage of the prototype object in JavaScript to build classes as it needs, usually following your class hierarchy (and any GWT classes you use), but it makes many other changes:
Optimizations:
Tightens up types - if you refer to something as List, but it can only be an ArrayList, it rewrites the type declarations. This by itself doesn't give much, but it lets other steps do better work, such as
Making methods static - if nothing ever overrides ArrayList.add, for example, this will turn any calls it can prove are to ArrayList.add into a static call, preventing the need for dynamic dispatch, and allowing the 'this' string in the final JS to be replaced with a shorter arg name. This will prevent a JS object from having a method you expect it to have.
Inline Methods - if a method is simple enough, and is called in few enough places, the compiler might remove the method entirely, since it knows all places where it is called. This will directly affect your use case.
Removes/Inlines unreferenced fields - if you read a field that is only written once, it will assume that the original value is a constant. If you don't read it, there is no reason to assign it. Values that the compiler can't tell will ever be used don't need to be using up space in the JS and time in the browser. This also will directly affect treating GWT'd Java as JS.
After these, among others, the compiler will rename fields, arguments, and types to be as small as possible - rarely will a field or argument be longer than 1 character when this is complete, since those are most frequently used and have the smallest scope, so can be reused the most often by the compiler. This too will affect trying to treat objects as JSON.
The libraries that allow you to export GWT objects as JSON do so by making some other assumption.
JavaScriptObject (JSO) isn't a real Java object, but actually represents a JavaScript instance, so you can cast back and forth at will - the JSNI you write will emerge relatively unoptimized, as the compiler can't tell if you are trying to talk to an external library.
AutoBeans are generated to assume that they should have the ability to write out JSON, so specific methods to encode objects are written in. They will be subject to the same rules as the other Java that is compiled - code that isn't used may be removed, code that is only called one way might be tightened up or inlined.
Libraries that can export JS compile in Java details into the final executable, making it bigger, but giving you the ability to treat these Java objects like JS in some limited way.
One last point, since you are talking both about JSON and Javascript - Some normal JS isn't suitable for writing out as JSON. Date objects don't have a consistent way to serialize that is recognized by JSON. Non-tree object graphs can't be serialized:
var obj = {};
obj.prop = {};
obj.prop.obj = obj;
AutoBeans come with a built-in checker for these circular references, and I would hope the JSO serialization does as well.