Can I use Vega-Lite v5 in an Observable notebook using JSON syntax (not the Vega-Lite API)? - vega-lite

The require("@observablehq/vega-lite") doesn't let us use Vega-Lite v5; it uses older versions. I know import {vl} from '@vega/vega-lite-api-v5' allows us to use version 5 and write in a more programmatic fashion. But the syntax on the Vega-Lite homepage and the API syntax are different. So, isn't there any way to use Vega-Lite v5 with JSON syntax (like the examples on https://vega.github.io/vega-lite/)?
Thanks!

@observablehq/vega-lite was a helper to make Vega-Lite easier to embed in Observable notebooks, but it's not needed anymore since we now have an official Vega-embed module:
// https://observablehq.com/@vega/hello-vega-embed
embed = require("vega-embed@6")
embed({ vega or vega-lite spec… })
Note that Vega-embed accepts both Vega and Vega-lite specifications.
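As a minimal sketch (the data values and field names here are made up), a plain Vega-Lite v5 JSON spec written exactly like the homepage examples can be handed to embed directly:

```typescript
// A plain Vega-Lite v5 JSON spec, written just like the examples on
// https://vega.github.io/vega-lite/ (the data values are invented):
const spec = {
  $schema: "https://vega.github.io/schema/vega-lite/v5.json",
  data: { values: [{ letter: "A", count: 28 }, { letter: "B", count: 55 }] },
  mark: "bar",
  encoding: {
    x: { field: "letter", type: "nominal" },
    y: { field: "count", type: "quantitative" }
  }
};

// In an Observable cell:
//   embed = require("vega-embed@6")
//   embed(spec)
```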

Related

Pass parameters to a vega-lite spec?

I'm looking for an idiomatic way to pass parameters into a vega-lite spec via vegaEmbed(). For example, I'd like to pass the data url, so that instead of my spec containing:
"data": {"url": "filename.json"},
it contained something like:
"data": {"url": parameters["dataURL"]},
At a high level, I want to display visualizations from external applications via a url, something like http://localhost/showViz.html?data=today.json&sort=ascending so it's not just about specifying the data source, I'm looking for a general mechanism to parameterize my specs.
I tried passing something via vegaEmbed's opt argument, but couldn't work out how to access it from the Vega-Lite spec (either as usermeta or not). If the opt argument is the preferred way to do something like this, then my question devolves to "how do I reference opt and/or usermeta values from a Vega-Lite spec?".
No, there is no standard means to parametrize inputs built into Vega or Vega-Lite.
There are third-party tools that do something similar to what you have in mind, for example Vega Kibana, which provides a templating syntax for charts.
If you want to do this using native Vega/Vega-Lite, you can always use JavaScript to modify the specification before passing it to the renderer, and/or use the vega-embed patch option to provide a JavaScript function that will patch the Vega specification (not the Vega-Lite specification) before it is rendered.
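As a sketch of the JavaScript-modification approach: read the query string and splice the values into the spec before calling vegaEmbed. parameterizeSpec is a made-up helper, not a Vega feature:

```typescript
// Sketch: inject URL query parameters into a Vega-Lite spec before rendering.
// parameterizeSpec is a hypothetical helper, not part of Vega/Vega-Lite.
function parameterizeSpec(baseSpec: object, queryString: string): object {
  const params = new URLSearchParams(queryString);
  return {
    ...baseSpec,
    data: { url: params.get("data") ?? "default.json" } // injected parameter
  };
}

// For a page URL like showViz.html?data=today.json&sort=ascending:
const vizSpec = parameterizeSpec(
  { mark: "bar" },                    // the rest of your spec
  "data=today.json&sort=ascending"    // window.location.search in practice
);
// vegaEmbed("#vis", vizSpec);
```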

How can I override the data that is rendered in Shopify .liquid templates without the need for Ajax?

I have a use case where I want to implement custom search functionality for a Shopify site. Instead of getting JSON with Ajax, building HTML and replacing the existing HTML, is there a way I can override the search.results data that the .liquid files use?
So when I make a search in /search?q=xyz, I want to get the data from my API, and use that data to render the product-item.liquid. This way I don't have to worry about the UI of the product-item for different themes.
Yes. You can easily do this. You would install an App in your shop and create an endpoint you would call with your search criteria. The endpoint is handled by a Shopify App Proxy, which securely allows you to call back to the App. You could return Liquid as the result, or just JSON, as you wish. It is a standard and simple pattern for you to use.
See here: https://help.shopify.com/api/tutorials/application-proxies
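As a sketch, the proxy endpoint can return Liquid so the shop's own theme renders the results. The handler shape and snippet names below are assumptions; adapt them to your app framework and theme:

```typescript
// Framework-agnostic sketch of an App Proxy handler (handleProxySearch and
// searchApi are invented names). Shopify renders a response served with
// Content-Type application/liquid through the shop's theme, so theme snippets
// such as product-item.liquid still apply, whatever the theme looks like.
function handleProxySearch(
  query: { q: string },
  searchApi: (term: string) => string[]  // your own API; returns product handles
): { contentType: string; body: string } {
  const handles = searchApi(query.q);
  const body = handles
    .map(h => `{% render 'product-item', product: all_products['${h}'] %}`)
    .join("\n");
  return { contentType: "application/liquid", body };
}
```

Returning Liquid rather than JSON is what frees you from re-implementing the product-item UI per theme: the theme's own snippet does the rendering.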

Strings Best Practice in Angular 2

I've been learning Angular 2, and wondering what is regarded best practice for storing strings. I use various strings throughout the application: Some are placed directly in HTML elements, e.g.,
// sample.html
<h1>This is my title</h1>
other strings are stored in component models that are bound to, e.g.,
// boundSample.html
<h1>{{myTitle}}</h1>
// boundSample.component.ts
import { Component } from '@angular/core';
@Component({
  templateUrl: 'boundSample.html'
})
export class BoundSampleComponent {
  myTitle = 'This is another title';
}
My concern is that my strings are spread throughout the application. Coming from a C#/WPF background, I'm used to keeping my strings in a single location (e.g. strings.xaml) that I can import into code and UI markup (i.e. XAML for WPF, HTML for Angular). This greatly helps with maintainability and internationalization.
Furthermore, a quick look at internationalization in Angular 2 suggests using the i18n attribute and the i18n tool. This assumes that all my strings are defined in HTML, but what if I want to use some of those strings in code...
How and where can I define a single location for my strings in Angular2 such that I can access those strings in code and make use of the internationalization tools?
You can search for existing tools; some of them are good and already implement the things you want. However, if you want to do it the way you are used to, just do the following:
1. Store the strings in an XAML / JSON / YAML / etc. file. If you use webpack, use the proper loader, which handles this for you; if not, you will need to parse the file yourself.
2. Create a service that reads this file (in the constructor, I guess) and has a function that returns the string for a given string token.
3. Create a pipe that returns a string based on a token.
4. Use the pipe in HTML and the service in the TypeScript files.
i18n is no problem: just pass the language to the service function, or subscribe to a language-change observable in the service.
The implementation is trivial. But think twice: you can use already existing solutions.
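A framework-free sketch of the service-and-pipe pattern described above (STRINGS, StringsService and StringsPipe are invented names; in Angular the classes would carry @Injectable() and @Pipe({ name: 'str' }) decorators):

```typescript
// One central place for all strings, keyed by language then token
// (this table could equally be loaded from a JSON/YAML file).
const STRINGS: { [lang: string]: { [token: string]: string } } = {
  en: { myTitle: "This is another title" },
  de: { myTitle: "Dies ist ein anderer Titel" }
};

// Would be @Injectable() in Angular; resolves tokens to strings.
class StringsService {
  private lang = "en";
  setLanguage(lang: string): void { this.lang = lang; }
  get(token: string): string {
    const table = STRINGS[this.lang] || {};
    // Fall back to the token itself when a translation is missing.
    return table[token] !== undefined ? table[token] : token;
  }
}

// Would be @Pipe({ name: 'str' }) in Angular, used as {{ 'myTitle' | str }}.
class StringsPipe {
  constructor(private strings: StringsService) {}
  transform(token: string): string { return this.strings.get(token); }
}
```

Components inject StringsService for strings needed in code, templates use the pipe, and switching the UI language is a single setLanguage call (or a subscription to a language-change observable, as suggested above).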

Returning MarkLogic EVAL REST service output as JSON

I am working on a demo using MarkLogic to store emails exported from Outlook as XML, so that they stay searchable and accessible when I move away from Outlook.
I am using an AngularJS front-end calling either the native MarkLogic REST services or my own REST services written in Java using Jersey.
MarkLogic SEARCH REST service works very well to get back a list of references to documents based on various search criteria, but I also want to display information stored inside the found documents.
I would like to avoid multiple REST calls and to get back only the needed information, so I am trying to use the EVAL REST service to run XQuery.
It works well to get XML back (inside a multipart/mixed message) but I don't seem to be able to get JSON instead which would be much more convenient and is very easy with most other MarkLogic REST services.
I could use "json:transform-to-json()" in my XQuery or transform the XML to JSON in my Java code, but that does not look very elegant to me.
Is there a more efficient method to get where I am trying to go ?
First, json:transform-to-json seems plenty elegant to me. But of course it's not always the right answer.
I see three options you haven't mentioned.
server-side transforms - REST search supports server-side transforms, which transform each document when you perform a bulk read by query. Those server-side transforms could generate any JSON you need.
search extract-document-data - this is the simplest way to extract portions of documents. But it works best if your documents are JSON, to match your JSON response. Otherwise you get XML in your JSON response... unless you're ok with that.
custom search snippets - another very powerful way to customize what search returns.
None of these options requires the privileges that eval requires, which is a very good thing. Since eval allows execution of arbitrary code on your server, it requires special privileges and should be used with great care. Two other options to consider before you use eval are (1) custom XQuery installed in an HTTP server, and (2) REST extensions.
The answers from Sam are what I would suggest. Specifically, I would set a search option for extract-document-data. (This is a search API option. If you are POSTing the request, you can add the option in the XML you post. If you are using GET, you need to register the option ahead of time and reference it by name.) Relevant URLs to assist:
https://docs.marklogic.com/guide/rest-dev/search#id_48838
https://docs.marklogic.com/guide/search-dev/appendixa#id_44222
As for JSON: MarkLogic 8 will transform content. Use the Accept header, or just add format=json to your request...
Example - XML, which is what my content is stored as:
http://localhost:8000/v1/search?q=watermellon
...
<search:result index="1" uri="/sample/collections/1.xml" path="fn:doc(&quot;/sample/collections/1.xml&quot;)" score="34816" confidence="0.5982239" fitness="0.6966695" href="/v1/documents?uri=%2Fsample%2Fcollections%2F1.xml" mimetype="application/xml" format="xml">
<search:snippet>
<search:match path="fn:doc(&quot;/sample/collections/1.xml&quot;)/x">
<search:highlight>watermellon</search:highlight>
</search:match>
</search:snippet>
</search:result>
...
Example - the same search returned as JSON (the content is still stored as XML):
http://localhost:8000/v1/search?q=watermellon&format=json
...
{
"index":1,
"uri":"/sample/collections/1.xml",
"path":"fn:doc(\"/sample/collections/1.xml\")",
"score":34816,
"confidence":0.5982239,
"fitness":0.6966695,
"href":"/v1/documents?uri=%2Fsample%2Fcollections%2F1.xml",
"mimetype":"application/xml",
"format":"xml",
"matches":[
{
"path":"fn:doc(\"/sample/collections/1.xml\")/x",
"match-text":[
{
"highlight":"watermellon"
}
]
}
]
}
...
For real heavy lifting, you can use server-side transforms as in Sam's description. One note about this: server-side transformations are not part of the search API, but part of the REST API. Just mentioning it so you have some idea of which tool you are using in each case.
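To make the format=json variant concrete, here is a small sketch of building the search call. buildSearchUrl and the option-set name are assumptions; options= references search options registered ahead of time, as noted above:

```typescript
// Sketch: build a /v1/search URL that returns JSON and applies a
// pre-registered options set (e.g. one containing extract-document-data).
function buildSearchUrl(base: string, q: string, optionsName?: string): string {
  const params = new URLSearchParams({ q, format: "json" });
  if (optionsName) params.set("options", optionsName); // registered search options
  return base + "/v1/search?" + params.toString();
}

const url = buildSearchUrl("http://localhost:8000", "watermellon", "email-options");
// -> http://localhost:8000/v1/search?q=watermellon&format=json&options=email-options
```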

Is there a json validation framework in play based on a specified grammar

An automated system is going to feed the application [Play with Scala] with JSONs, and the contract of the integration is that no validation is required on the JSONs, since they will always be deemed right. But for testing purposes, when we seed the data, more often than not we are unable to send correct JSONs. We would like to validate the JSONs we receive against a set of grammars. Is there a library that already does this, or is there a better way to do it?
Example: Grammar for valid Json :
"header"->[String, mandatory],
"footer"->[String],
"someArray"->Array[String, mandatory],
"someArrayObject"->Array[
{
{"key1"->Int, mandatory},
{"key2"->String}
},
mandatory
]
and passing,
{
"header":"headerContent",
"footer":"footerContent",
"someArray":["str1", "str2"],
"someArrayObject":[
{"key1":4, "key2":"someStringValue"},
{"key1":5, "key2":"someOtherStringValue"}
]
} // would pass
{
"header":"headerContent",
"footer":"footerContent",
"someArray":["str1", "str2"]
} // would not pass, since someArrayObject, though declared mandatory, is not provided in the sample JSON
I think play-json will satisfy you: play-json.
In play-json you don't create a validator as such, but a JSON transformer, which is a validator in itself. The author of the framework wrote a series of blog posts showing how to work with it: json-transformers.
(Hadn't noticed you use Play; Play has play-json included by default.)
You don't have to roll your own DSL. This is why we have schemas. Just like using XML schemas to validate your XML documents, you can define a JSON schema to validate your JSON objects. I had a similar requirement when building a RESTful web service using Play. I solved it by using the JSON Schema Validator library.
I have used the JSON Schema draft v3. The library supports draft v3 and draft v4. You can validate your schemas against possible JSON inputs using a web application that uses the same library. The web app is hosted here.
Also there are pretty nice examples that use the draft v4. You can check them out from here.
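For the grammar in the question, a draft-04 schema might look like this (a sketch; the field names are taken from the question):

```json
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "type": "object",
  "required": ["header", "someArray", "someArrayObject"],
  "properties": {
    "header": { "type": "string" },
    "footer": { "type": "string" },
    "someArray": { "type": "array", "items": { "type": "string" } },
    "someArrayObject": {
      "type": "array",
      "items": {
        "type": "object",
        "required": ["key1"],
        "properties": {
          "key1": { "type": "integer" },
          "key2": { "type": "string" }
        }
      }
    }
  }
}
```

Mandatory fields go in the required array; optional fields (footer, key2) are simply listed under properties.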
In Play 2, I have composed an action that takes the schema resource file name as input. This keeps away a lot of JSON validation code from the controller action itself.
@JsonValidate("user-register.json")
public static Result create() {
...
}
This way, all JSON Validation code stays in one place. Pretty neat :)