Script namespace support in Mule ESB

Which JAR do I have to add to the build path for the scripting schema? I am getting the error below:
org.xml.sax.SAXParseException: cos-ct-extends.1.4.3.2.2.1.b:
The content type of a derived type and that of its base must both be mixed or both be element-only.
Type 'scriptType' is mixed, but its base type is not
while using this namespace declaration:
"http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/3.1/mule-scripting.xsd"

Use a namespace declaration that is consistent with your Mule version: the location above points to the 3.1 scripting XSD, so if you are running a different Mule version, reference that version's XSD instead (and make sure the matching mule-module-scripting JAR is on the build path).


Purpose of "resolveJsonModule"?

The setting I am referencing is shown in the snippet below:
{
  "compilerOptions": {
    "resolveJsonModule": true
  }
}
I don't really understand why TS language engineers would add a flag for "resolveJsonModule". Either an environment supports resolving JSON as a module via an import statement (or the require() method), or it doesn't. Why bother with the extra complexity?
Context
Historically, Node has included a specialized JSON loader (unrelated to ECMA standards) to allow importing JSON data only in CommonJS mode.
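For example, under CommonJS this has always worked without any special compiler flags (the file name here is illustrative):
// CommonJS: Node reads and parses the JSON file and returns the resulting object.
const data = require('./data.json');
console.log(data);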
Standardized importing of anything at all (ES modules) is only a relatively recent phenomenon in ECMAScript. Importing text files containing valid JSON, parsed as native JS data ("importing JSON") is described in a proposal that is still only in stage 3.
However, there has been recent movement with regard to implementations of the above-mentioned proposal (a sketch of the syntax follows the list):
V8 implemented it in June (Chrome 91+)
TypeScript v4.5.0 implemented it in November
Deno v1.17.0 implemented it in December
Node LTS v16.14.0 implemented it in February 2022 (behind a CLI flag --experimental-json-modules)
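At the time of writing, the stage-3 syntax those implementations use looks like this (a sketch; the file name is illustrative):
// Stage-3 JSON modules: the import assertion tells the host to parse the file as JSON.
import data from './data.json' assert { type: 'json' };
console.log(data);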
TypeScript
TypeScript is a static type-checker, but also a compiler (technically a transpiler): it transforms your TS source code into JavaScript that is valid for the runtime environment you have specified in your TSConfig. Because different runtime environments have different capabilities, the way you configure the compiler affects the JavaScript that is emitted. As for defaults, the compiler determines settings algorithmically. (I can't summarize that here: you honestly have to read the entire reference in order to understand it.) Because loading JSON data has been a non-standard, specialized operation until extremely recently, it has not been a default.
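As a minimal sketch (the file name and contents are made up): with "resolveJsonModule": true, the compiler both emits working loading code and infers a type from the file's contents:
// config.json contains: { "port": 8080, "host": "localhost" }
import config from './config.json';
// The compiler infers { port: number; host: string } from the JSON text,
// so this assignment type-checks:
const port: number = config.port;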
Alternatives
All JS runtimes offer alternatives to an import statement for importing textual JSON data (which can then be parsed using JSON.parse), and none of them require configuring the compiler in the ways that you asked about; a Node sketch follows the list below:
Note: data parsed from JSON strings imported using these methods will not participate in the "automatic" type inference of the compiler, because it isn't part of the module graph: it will be typed as any (or possibly unknown in an extremely strict configuration).
Browser and Deno: window.fetch
Deno: Deno.readTextFile
Node: fs.readFile
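For instance, a Node sketch using fs/promises (assuming an ES-module context so that top-level await is available):
// Read the file as text, then parse it yourself.
import { readFile } from 'fs/promises';

const text = await readFile('./data.json', 'utf8');
// JSON.parse returns `any`; typing the variable as `unknown` forces explicit typing later.
const data: unknown = JSON.parse(text);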
Additionally, because all JSON (JavaScript Object Notation) is valid JS, you can simply prepend the data in your JSON file with export default, save the file as data.js instead of data.json, and then import it as a standard module: import {default as data} from './data.js';.
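That is, a hypothetical data.js would look like this:
// data.js - the former contents of data.json, prefixed with `export default`
export default {
  "port": 8080,
  "host": "localhost"
}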
Final notes about inferred types:
I prefer to audit the JSON I'm importing and use my own manually-written types for the data (written either by myself or someone else, and imported from a module/declaration file), rather than relying on the compiler's types inferred from import statements (which I have found to be too narrow on many occasions). I do this by assigning the parsed JSON data to a new variable using a type assertion.
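A sketch of that pattern (the Config interface and the JSON text are made up):
// Manually written (or imported) type for the audited JSON.
interface Config {
  port: number;
  host: string;
}

const text = '{ "port": 8080, "host": "localhost" }';
// Assert the parsed data to the hand-written type instead of relying on inference.
const config = JSON.parse(text) as Config;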

Cast JSON to PowerShell with proper Auto-Completion (IntelliSense)

Is there a way to get autocompletion with IntelliSense in Visual Studio Code to work properly when I read a JSON file and convert it to a PowerShell object, like:
$config = Get-Content "SOME_PATH" | ConvertFrom-Json
$config.attribute1
The problem is that the file needs to be in memory before IntelliSense can get the structure of the JSON file and propose attributes.
If I take the code out and execute it in the PowerShell terminal and then go back to the code editor, the autocompletion works fine.
Currently IntelliSense doesn't do this.
Some of the reasons are that, when you are coding, the specified file often might not exist, might be a different test version than expected, could be huge, malformed, etc. By default, it is sometimes safer to just not do this.
By running it in the terminal and loading it in memory, you are explicitly telling IntelliSense what you are using, and then it now "knows" about the object, and can then properly suggest the correct properties and attributes.
As mklement0 suggests, using the keyboard shortcut F8 will conveniently execute the current line/selection in the integrated terminal, which loads the object into memory and allows you to use IntelliSense in the editor.
To complement HAL9256's helpful answer:
First, some background information; find a working solution in the bottom section.
IntelliSense for variables in Visual Studio Code works if their type is either:
explicitly declared (e.g., [datetime] $var = ...)
or can be inferred from an assigned value.
If an assignment is based on a command (cmdlet, function, script) call, the type can only be inferred from commands with explicitly defined output type(s):
many, but by no means all, cmdlets do declare their output types
functions and scripts must use the [OutputType(<type>)] attribute.
Additionally, with the nondescript [pscustomobject] type returned by ConvertFrom-Json - which has no inherent properties, only those you add on demand; a "property bag" - you only get IntelliSense:
if a variable was assigned from a custom-object literal (e.g., $var = [pscustomobject] @{ one = 1; two = 2 })
if you cast a custom object to a specific type, assuming instances of that type can be constructed from the custom object's properties, something that PowerShell makes easy - see this answer.
Solution with a custom class (PSv5+)
Visual Studio Code's IntelliSense (via the PowerShell Extension) does recognize the members of instances of PS custom classes defined via the PSv5+ class statement.
You can therefore use a custom class to mirror the structure of the JSON object you're loading and convert the [pscustomobject] "property bags" returned by ConvertFrom-Json to an instance of that class by way of a cast.
Note: The inherent limitation of this approach is that your class must anticipate all property names that the underlying JSON objects contain, and the two must be kept in sync; otherwise:
if a property name changes on the JSON side, your code will break if the class definition isn't updated accordingly.
if new properties are added on the JSON side, these will be inaccessible unless the class definition is updated accordingly.
Class definitions can either be:
directly embedded in a script
imported from modules via the using module statement (note that using Import-Module does not load a module's classes).
To implement the solution at hand, you can use a class definition in one of two ways:
(a) Define a class directly in your script that matches the structure of the JSON objects, and cast the [pscustomobject] instance returned from ConvertFrom-Json to that type; a variable assigned this way supports IntelliSense.
(b) Wrap the JSON-loading functionality in a module that performs the above inside the module and passes the class instance out from a function that declares its [OutputType()] to be that class; code that imports the module with using module will then get IntelliSense for variables that capture the output from that function.
A simple demonstration of (a):
# Define a class whose properties mirror the underlying JSON.
class Config {
$foo
$bar
}
# Load the JSON and cast the resulting [pscustomobject] to the class.
# Note: This cast only works if the JSON object's set of properties
# is either the same as that of the [Config] type or a subset of it.
[Config] $config = '{ "foo": "bar", "bar": 42 }' | ConvertFrom-Json
# Variable $config supports IntelliSense, because it is now known
# to be of type Config.
$config. # shows list of properties
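For completeness, a minimal sketch of (b); the file and function names are hypothetical:
# MyConfigModule.psm1 (hypothetical module file)
class Config {
    $foo
    $bar
}
function Get-Config {
    [OutputType([Config])]
    param([string] $Path)
    # Cast the [pscustomobject] returned by ConvertFrom-Json to the class.
    [Config] (Get-Content -Raw $Path | ConvertFrom-Json)
}
A script that imports this module with using module ./MyConfigModule.psm1 then gets IntelliSense on $config = Get-Config ./config.json.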

What is json schema equivalent of targetNamespace?

In any XML file I can say what namespace I refer to by using xmlns attributes to describe the namespace. It is well described here: What does "xmlns" in XML mean?
Then I can use an XML schema having a target namespace, so that everyone knows that the schema describes that namespace. One question about that is found here: Why do we need targetNamespace?
Using JSON Schema we can define schemas for JSON documents. My mental model is that this is roughly equivalent to having an XSD file.
Now, how do I reference the schema in a JSON object? I can reference a schema using the $schema attribute, but how do I declare the name of the schema I develop myself? I don't understand the equivalent of targetNamespace.
While researching this question I found the answer. The closest thing to a targetNamespace is the $id attribute. The standard states...
The "$id" keyword defines a URI for the schema, and the base URI that
other URI references within the schema are resolved against. A
subschema's "$id" is resolved against the base URI of its parent
schema. If no parent sets an explicit base with "$id", the base URI is
that of the entire document, as determined per RFC 3986 section 5
[RFC3986].
... which is kind of the mirror image of the leading text for $schema...
The "$schema" keyword is both used as a JSON Schema version identifier
and the location of a resource which is itself a JSON Schema, which
describes any schema written for this particular version. The value
of this keyword MUST be a URI [RFC3986] (containing a scheme) and this
URI MUST be normalized. The current schema MUST be valid against the
meta-schema identified by this URI.
so it is essentially the same thing.
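As an illustration (the URIs below are made up, except the draft-07 meta-schema URI), a schema declares its own URI with $id while pointing at its meta-schema with $schema:
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "$id": "https://example.com/schemas/person.json",
  "type": "object",
  "properties": {
    "name": { "type": "string" }
  }
}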
Some things to note, however:
a) You use $schema in a schema to define which meta-schema your own custom schema is written against. The spec does not state that $schema in an arbitrary JSON document should indicate which schema to validate it against.
b) You may define in your own schema that $schema should be an indication of which schema to use for validation.
c) There are other ways to indicate the schema for data. One example is the Content-Type HTTP header; another is Link HTTP headers.
d) VS Code and Visual Studio both interpret $schema as a reference to a schema to use for validation.
The issue has been discussed at the github repo for the spec.
https://github.com/json-schema/json-schema/issues/235
https://github.com/json-schema/json-schema/issues/220#issuecomment-209452992

Biztalk 2013: Setting different namespace on schema elements

I'm developing a BizTalk application to query a number of web services that have been written and maintained by a third party, and I'm having some trouble getting the namespaces right on the Schemas.
Basically, I can't consume the WSDL to automatically generate the schemas because the namespaces and element names are all wrong within the generated schemas (due to lazy C# WSDL generation), so I'm having to write them from scratch. This would be fine, but the web service endpoints require that the elements within the schema all be qualified with specific namespaces, and none of them match the namespace of the overall schema.
I have figured out how to import other namespaces/schemas into my schema, but I can't figure out how to change the namespace of the elements to anything but the default. Does anyone know how to do this?
For example, the schema root has to have a namespace of "http://tempuri.org/", but one of the elements requires the namespace "http://schemas.datacontract.org/2004/07/ReadService.DTO.Inbound.Supplier", and within BizTalk I can't edit the namespace of that element to change it.
The body of one of the requests looks like this:
<tem:GetSupplierIdWithExternalId>
  <tem:request>
    <com:Header>
      <com1:Username></com1:Username>
      <com1:Locale></com1:Locale>
    </com:Header>
    <read:ExternalSupplierId></read:ExternalSupplierId>
  </tem:request>
</tem:GetSupplierIdWithExternalId>
"tem" in this case is http://tempuri.org/". "com", "com1" and "read" are all different namespaces, which, as Gruff has pointed out, are all default namespaces for WCF projects.
Generating from WSDL in BizTalk creates 2 issues:
The default namespace applied to the root node is not tempuri.org (as it recognises this as a default); it's the standard BizTalk http://..Folder.SchemaName namespace. Changing this to tempuri.org causes a cascade of errors that have to be fixed, and it doesn't resolve the more major issue, which is:
Because of the way the WCF functions the WSDL has been generated from are written, the major element names (GetSupplierIdWithExternalId above) are all named incorrectly - in most cases something like "GetSupplierIdWithExternalIdRequest", because that's the name of the function the schema is generated from. Again, it's due to lazy programming on the endpoints: the name of the element isn't being properly defined, it's just assumed by the generation process.
If I try to create a single flat file schema, I can only define a single namespace for the whole file, and if I set that to tempuri.org I get:
<ns0:GetSupplierWithExternalId xmlns:ns0="http://tempuri.org/">
  <Header>
    <Username>Username_0</Username>
    <Locale>Locale_0</Locale>
  </Header>
  <ExternalSupplierId>ExternalSupplierId_0</ExternalSupplierId>
</ns0:GetSupplierWithExternalId>
...which fails as a SOAP request because the namespaces on the internal elements aren't correct.
Thanks in advance!
You will need to define the element with the namespace of "http://schemas.datacontract.org/2004/07/ReadService.DTO.Inbound.Supplier" in its own schema file, import it into the root schema, and compose the root that way. The element will keep the namespace it was defined in; a sketch follows below.
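As a rough sketch (the file names are hypothetical, and the declarations are reduced to one element each):
<!-- Supplier.xsd: defines the element in its own target namespace -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://schemas.datacontract.org/2004/07/ReadService.DTO.Inbound.Supplier"
           elementFormDefault="qualified">
  <xs:element name="ExternalSupplierId" type="xs:string"/>
</xs:schema>

<!-- Root schema: imports Supplier.xsd and references its element -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:read="http://schemas.datacontract.org/2004/07/ReadService.DTO.Inbound.Supplier"
           targetNamespace="http://tempuri.org/"
           elementFormDefault="qualified">
  <xs:import namespace="http://schemas.datacontract.org/2004/07/ReadService.DTO.Inbound.Supplier"
             schemaLocation="Supplier.xsd"/>
  <xs:element name="GetSupplierIdWithExternalId">
    <xs:complexType>
      <xs:sequence>
        <xs:element ref="read:ExternalSupplierId"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>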
Looking at the namespace "http://schemas.datacontract.org/2004/07/ReadService.DTO.Inbound.Supplier", it seems it is the default namespace that WCF gives the data contract because one was not explicitly defined (the CLR namespace of the class is ReadService.DTO.Inbound.Supplier). When the DataContractSerializer serializes the message for the request, it will serialize it with that namespace. You should not try to change it in the BizTalk schema; otherwise there will be a schema mismatch.
UPDATE:
In your update you mention 2 issues when generating the schema from the WSDL.
Can you paste a screenshot of this?
Are you sure GetSupplierIdWithExternalIdRequest is incorrect? If you search in the WSDL for that term, can you find it?
The operation's request and response wrappers typically get suffixed with -Request and -Response, so this might be perfectly correct.

Unable to use "as JSON" after upgrading to grails 2.1.1 from grails 1.3.4

I'm in the process of upgrading a Grails plugin from 1.3.4 to Grails 2.1.1. After upgrading, I now have an integration test that fails that was not failing before. It fails on using "as JSON" (grails.converters.JSON).
@Test
public void testConvertCollectionOfEnvironmentSettingsToJSON() {
    EnvironmentSetting setting = configurationService.getEnvironmentSetting('ENFORCE_SCHEMA_INSTANCE_RULE')
    def jsonSetting = setting as JSON // exception thrown here
    def s = jsonSetting as String
    assertNotNull jsonSetting
}
The exception and stacktrace:
org.codehaus.groovy.runtime.typehandling.GroovyCastException: Cannot cast object 'com.company.ipsvc.configuration.domain.EnvironmentSettingAllRevs#48c12420' with class 'com.company.ipsvc.configuration.domain.EnvironmentSettingAllRevs' to class 'grails.converters.JSON'
at com.company.ipsvc.configuration.converter.json.basic.BasicEnvironmentSettingJSONIntegrationTests.testConvertCollectionOfEnvironmentSettingsToJSON(BasicEnvironmentSettingJSONIntegrationTests.groovy:28)
I am able to use encodeAsJSON() successfully. I also have the same issue with as XML.
I think converters (the as JSON syntax) will only work on domain objects and collections by default.
To convert arbitrary objects you should use the encodeAsJSON() converter, I believe. Or use an object marshaller, where you tell the converter how to deal with your object; a sketch follows below.
The docs aren't very clear on this, though.
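A minimal sketch of registering a marshaller (MyPogo and its properties are made up), e.g. in BootStrap.groovy:
import grails.converters.JSON

// Tell the JSON converter how to render instances of MyPogo.
JSON.registerObjectMarshaller(MyPogo) { MyPogo obj ->
    [name: obj.name, value: obj.value]
}
After registration, the converter knows how to serialize MyPogo wherever it appears (e.g., inside a collection rendered as JSON, or via new JSON(obj)).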
See:
http://grails.org/Converters+Reference (object marshalling section at bottom)
http://grails.org/doc/latest/ref/Plug-ins/codecs.html
But I note that http://grails.org/doc/latest/api/grails/converters/JSON.html#JSON%28java.lang.Object%29 says that the JSON(Object) constructor converts POGOs. Maybe it means it works if you have a marshaller?
I did find this reference too:
Notice that the ‘as’ operator is not overloaded for plain objects ...
Domain objects can use the ‘as’ operator to cast an object to JSON, the same as a collection. So unlike POGOs, where they must be massaged into a list or have encodeAsJSON explicitly called ...
http://manbuildswebsite.com/2010/02/08/rendering-json-in-grails-part-2-plain-old-groovy-objects-and-domain-objects/
Which seems to describe the situation.
For non-domain objects, we found that this would crop up when running tests... the solution for us was to use new JSON:
render new JSON( obj )
This allows the test to work, and the code does (essentially) the same thing.
Ran into a similar issue that broke a unit test using Grails 2.2.1. At issue was a straight obj as JSON conversion attempt, but this was interpreted as a type cast instead.
The workaround is to stuff the object to be converted into a map, like this: [data: obj] as JSON
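A minimal sketch of the workaround (obj stands for any non-domain object):
// Fails in tests: a plain `obj as JSON` is interpreted as a type cast.
// def json = obj as JSON
// Workaround: wrap the object in a map, which the converter handles.
def json = [data: obj] as JSON
render json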