This is a very very (very!!!) strange problem.
I have this JScript, in a file called testJSON.js, that runs on Windows XP and 7 via CSCRIPT at the command prompt.
if ( ! this.JSON ) WScript.Echo("JSON DOESN'T EXISTS");
And, well, the message appears, but this is unexpected behavior from JScript, because JSON (as the MSDN documentation says) is one of the default objects in JScript 5.8, and my Windows 7 system runs exactly JScript 5.8.
For now I have temporarily worked around the problem (in a somewhat complex script) by creating a new text file and MANUALLY composing a valid JSON string (which, obviously, makes everything work even on systems that lack the JScript 5.8 required for JSON), but I would mainly like to know two things:
1st: Why can't I use the JSON object even though my JScript version is the one that supports it?
2nd: I have read something about "enabling" the JSON (and other) unavailable objects in my JScript environment, but all the examples are for C#, and I would like to know whether equivalent code exists for JScript.
You can use eval() to achieve an effect similar to JSON.parse().
eval('obj = {' + JSONstring + '}');
Afterwards, obj.toString() will let you retrieve the data much as JSON.stringify() would (just without the beautify options). See this answer for an example in the wild. The point is that you can create an object from JSON text without having to load any external libraries or switch the interpreter engine.
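For instance, a minimal sketch (JSONstring is a stand-in for your own, trusted data; note that this pattern supplies the outer braces itself):
var JSONstring = '"name": "example", "count": 3';
var obj;
eval('obj = {' + JSONstring + '}');
WScript.Echo(obj.name);  // example
WScript.Echo(obj.count); // 3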
BIG FAT WARNING!!!
This introduces a vulnerability into the workstation running your code. If you do not control the generation of the JSON you wish to parse, or if it is possible that a 3rd party might modify the JSON between its generation and its interpretation, then consider following Helen's advice. If bad things are in the JSON, it can cause your WScript to do bad things. For example, if your JSON string or file contains the following:
};
var oSH = WSH.CreateObject("wscript.shell"),
    cmd = oSH.Exec("%comspec%");
WSH.Sleep(250);
cmd.StdIn.WriteLine("net user pwnd password /add");
WSH.Sleep(250);
cmd.StdIn.WriteLine("net group Administrators pwnd /add");
WSH.Sleep(250);
cmd.Terminate();
var obj = {
    "objName": {
        "item1": "value 1",
        "item2": "value 2"
    }
... then parsing it with eval will have just added a new administrator to your computer without any visual indication that it happened.
My advice is to feel free to employ eval for private or casual use; but for widespread deployment, consider including json2.js as Helen suggests. Edit: Or...
htmlfile COM object
You can import the JSON methods by invoking the htmlfile COM object and forcing it into IE9 (or higher) compatibility mode by means of a <META> tag, like this (note: in these snippets WSH stands for the WScript host object; substitute WScript if you haven't aliased it):
var htmlfile = WSH.CreateObject('htmlfile'), JSON;
htmlfile.write('<meta http-equiv="x-ua-compatible" content="IE=9" />');
htmlfile.close(JSON = htmlfile.parentWindow.JSON);
With those three lines, the JSON object and methods are copied into the JScript runtime, letting you parse JSON without using eval() or downloading json2.js. You can now do stuff like this:
var pretty = JSON.stringify(JSON.parse(json), null, '\t');
WSH.Echo(pretty);
Here's a breakdown:
// load htmlfile COM object and declare empty JSON object
var htmlfile = WSH.CreateObject('htmlfile'), JSON;
// force htmlfile to load Chakra engine
htmlfile.write('<meta http-equiv="x-ua-compatible" content="IE=9" />');
// The following statement is an overloaded compound statement, a code golfing trick.
// The "JSON = htmlfile.parentWindow.JSON" statement is executed first, copying the
// htmlfile COM object's JSON object and methods into "JSON" declared above; then
// "htmlfile.close()" ignores its argument and unloads the now unneeded COM object.
htmlfile.close(JSON = htmlfile.parentWindow.JSON);
See this answer for other methods (json2.js download via XHR, InternetExplorer.Application COM object, an HTA hybrid method, and another example of htmlfile).
Why can't I use the JSON object even though my JScript version is the one that supports it?
According to MSDN, Windows Script Host uses the JScript 5.7 feature set by default for backward compatibility. The JScript 5.8 feature set is only used in Internet Explorer in the IE8+ Standards document modes.
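You can observe the mismatch directly: the engine identifies itself as 5.8 even while the 5.8 feature set stays dormant. A quick check using the standard JScript ScriptEngine* functions:
WScript.Echo(ScriptEngine() + ' ' +
    ScriptEngineMajorVersion() + '.' +
    ScriptEngineMinorVersion() + '.' +
    ScriptEngineBuildVersion());
// Under cscript on Windows 7 this typically prints "JScript 5.8.xxxxx",
// yet the JSON object from the 5.8 feature set is still missing.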
You have the following options:
Include json2.js in your script. See this question for options for including external scripts in JScript scripts; a minimal loader sketch appears after this list.
Modify the registry to expose IE9's JScript engine to Windows Script Host. UPD: This solution uses IE's JScript DLLs, but doesn't activate the 5.8 feature set.
Create a JScript execution host programmatically using the Active Script interfaces and use IActiveScriptProperty::SetProperty to force the JScript 5.8 feature set (SCRIPTLANGUAGEVERSION_5_8). Here's a C++ example.
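For instance, option 1 can be as simple as reading json2.js from disk and eval()ing its contents (a minimal sketch; it assumes json2.js sits in the script's working directory):
var fso = new ActiveXObject('Scripting.FileSystemObject');
var file = fso.OpenTextFile('json2.js', 1); // 1 = ForReading
eval(file.ReadAll());                       // defines JSON.parse and JSON.stringify
file.Close();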
I have read something about "enabling" the JSON (and other) unavailable objects in my JScript environment, but all the examples are for C#, and I would like to know whether equivalent code exists for JScript.
Custom script execution hosts can be created only in languages with proper COM support, such as C++ or C#. JScript can't be used for that because, for example, it doesn't support out parameters.
JSON encode/decode without the default parser: https://gist.github.com/gnh1201/e372f5de2e076dbee205a07eb4064d8d
var $ = {};
$.json = {};
/**
* Decode JSON
*
* @param string jsonString - JSON text
*
* @return object
*/
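// Caution: new Function() executes its input just like eval(), so only decode trusted JSON.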
$.json.decode = function(jsonString) {
    return (new Function("return " + jsonString)());
};
/**
* Encode JSON
*
* @param object obj - Key/Value object
*
* @return string
*/
$.json.encode = function(obj) {
    var items = [];
    var isArray = (function(_obj) {
        try {
            return (_obj instanceof Array);
        } catch (e) {
            return false;
        }
    })(obj);
    var _toString = function(_obj) {
        try {
            if (typeof(_obj) == "object") {
                return $.json.encode(_obj);
            } else {
                var s = String(_obj).replace(/"/g, '\\"');
                if (typeof(_obj) == "number" || typeof(_obj) == "boolean") {
                    return s;
                } else {
                    return '"' + s + '"';
                }
            }
        } catch (e) {
            return "null";
        }
    };
    for (var k in obj) {
        var v = obj[k];
        if (!isArray) {
            items.push('"' + k + '":' + _toString(v));
        } else {
            items.push(_toString(v));
        }
    }
    if (!isArray) {
        return "{" + items.join(",") + "}";
    } else {
        return "[" + items.join(",") + "]";
    }
};
/**
* Test JSON
*
* @param object obj - Key/Value object
*
* @return boolean
*/
$.json.test = function(obj) {
    var t1 = obj;
    var t2 = $.json.encode(obj);
    $.echo($.json.encode(t1));
    var t3 = $.json.decode(t2);
    var t4 = $.json.encode(t3);
    $.echo(t4);
    if (t2 == t4) {
        $.echo("success");
        return true;
    } else {
        $.echo("failed");
        return false;
    }
};
/**
* Echo
*
* @param string txt
*
* @return void
*/
$.echo = function(txt) {
    if ($.isWScript()) {
        WScript.Echo(txt);
    } else {
        try {
            window.alert(txt);
        } catch (e) {
            console.log(txt);
        }
    }
};
/**
* Check if WScript
*
* @return bool
*/
$.isWScript = function() {
    return typeof(WScript) !== "undefined";
};
// test your data
var t1 = {"a": 1, "b": "banana", "c": {"d": 2, "e": 3}, "f": [100, 200, "3 hundreds", {"g": 4}]};
$.json.test(t1);
Related
I want to create a CSV file for 1.3M records from my MarkLogic DB. I tried using CORB for that, but it took more time than I expected.
My data looks like this:
{
    "One": {
        "Name": "One",
        "Country": "US"
    },
    "Two": {
        "State": "kentucky"
    },
    "Three": {
        "Element1": "value1",
        "Element2": "value2",
        "Element3": "value3",
        "Element4": "value4",
        so on ...
    }
}
Below are my CORB modules.
selector.sjs
var total = cts.uris("", null, cts.collectionQuery("data"));
fn.insertBefore(total,0,fn.count(total))
transform.sjs (where I am keeping all the elements in an array):
var name = fn.tokenize(URI, ";");
const node = cts.doc(name);
var a= node.xpath("/One/*");
var b= node.xpath("/Two/*");
var c= node.xpath("/Three/*");
fn.stringJoin([a, b, c, name], " , ")
My properties file:
THREAD-COUNT=16
BATCH-SIZE=1000
URIS-MODULE=selector.sjs|ADHOC
PROCESS-MODULE=transform.sjs|ADHOC
PROCESS-TASK=com.marklogic.developer.corb.ExportBatchToFileTask
EXPORT-FILE-NAME=Report.csv
PRE-BATCH-TASK=com.marklogic.developer.corb.PreBatchUpdateFileTask
EXPORT-FILE-TOP-CONTENT=Col1,col2,....col16 -- I have 16 columns
It took more than 1 hour to create the CSV file. Also, to try this on a cluster I would need to configure a load balancer first, whereas the Java Client API distributes the work among all nodes without any load balancer.
How can I implement the same thing with the Java Client API? I know I can trigger the transform module using ServerTransform and ApplyTransformListener.
public static void main(String[] args) {
    DatabaseClient client = DatabaseClientFactory.newClient(
        "localhost", pwd, "x", "x", DatabaseClientFactory.Authentication.DIGEST);
    // here I am implementing the same logic as the transform module above
    ServerTransform txform = new ServerTransform("tsm");
    QueryManager qm = client.newQueryManager();
    StructuredQueryBuilder query = qm.newStructuredQueryBuilder();
    query.collection();
    DataMovementManager dmm = client.newDataMovementManager();
    QueryBatcher batcher = dmm.newQueryBatcher(query.collections("data"));
    batcher.withBatchSize(2000)
           .withThreadCount(16)
           .withConsistentSnapshot()
           .onUrisReady(
               new ApplyTransformListener().withTransform(txform))
           .onBatchSuccess(batch -> {
               System.out.println(
                   batch.getTimestamp().getTime() +
                   " documents written: " +
                   batch.getJobWritesSoFar());
           })
           .onBatchFailure((batch, throwable) -> {
               throwable.printStackTrace();
           });
    // start the job and feed input to the batcher
    dmm.startJob(batcher);
    batcher.awaitCompletion();
    dmm.stopJob(batcher);
    client.release();
}
But how can I send the CSV file header like the one in CORB (i.e. EXPORT-FILE-TOP-CONTENT)? Is there any documentation for implementing the CSV file? Which class will implement that?
Any help is appreciated.
Thanks
Probably the easiest option is ml-gradle Exporting data to CSV, which uses the Java Client API and DMSDK under the hood.
Note that you'll probably want to install a server-side REST transform to extract only the data you want in the CSV output, rather than download the entire doc contents then extract on the Java side.
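A rough sketch of such a transform in server-side JavaScript (untested; the property names and field paths are assumptions based on the sample data in the question):
// transform.sjs - keep only the fields destined for CSV columns
function transform(context, params, content) {
  var doc = content.toObject();
  return {
    name: doc.One.Name,
    country: doc.One.Country,
    state: doc.Two.State
  };
}
exports.transform = transform;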
For a working example of the code required to use DMSDK and create an aggregate CSV (one CSV for all records), see ExportToWriterListenerTest.testMassExportToWriter. For the sake of SO, here's the key code snippet (with a couple of minor simplifications, including writing column headers (untested code)):
try (FileWriter writer = new FileWriter(outputFile)) {
    writer.write("uri,collection,contents\n"); // newline so the first record starts on its own line
    writer.flush();
    ExportToWriterListener exportListener = new ExportToWriterListener(writer)
        .withRecordSuffix("\n")
        .withMetadataCategory(DocumentManager.Metadata.COLLECTIONS)
        .onGenerateOutput(
            record -> {
                String uri = record.getUri();
                String collection = record.getMetadata(new DocumentMetadataHandle()).getCollections().iterator().next();
                String contents = record.getContentAs(String.class);
                return uri + "," + collection + "," + contents;
            }
        );
    QueryBatcher queryJob =
        moveMgr.newQueryBatcher(query)
               .withThreadCount(5)
               .withBatchSize(10)
               .onUrisReady(exportListener)
               .onQueryFailure( throwable -> throwable.printStackTrace() );
    moveMgr.startJob( queryJob );
    queryJob.awaitCompletion();
    moveMgr.stopJob( queryJob );
}
However, unless you know your content has no double quotes, newlines, or non-ascii characters, a CSV library is recommended to make sure your output is properly escaped. To use a CSV library, you can of course use any tutorial out there for your library. You don't need to worry about thread safety because ExportToWriterListener runs your listeners in a synchronized block to prevent overlapping writes to the writer. Here's an example of using one CSV library, Jackson CsvMapper.
Please note that you don't have to use ExportToWriterListener . . . you can use it as a starting point to write your own listener. In particular, since your major concern is performance, you may want to have your listeners write to one file per thread, then post-process to combine things together. It's up to you.
I replaced PageJS routing in our application with app-location and app-route, and everything seems to be working except the query parameters. I noticed it can only read URLs like host?param1=val1#/view, not host#view?param1=val1 the way PageJS used to.
Upon further digging, I discovered that this is actually the RFC standard (the query string belongs before the fragment). I find it odd that PageJS and Angular can use the nonstandard query string format.
Is there a way to use the query-params attribute of app-route to read the nonstandard query parameters for backward compatibility?
The non-standard form works in PageJS because PageJS manually parses the query string from the URL by extracting the text that follows ? and then parsing that for any hashes that need to be separated out. Angular might do something similar. On the other hand, <app-location> (and <iron-location>) uses the platform's window.location.search to fetch the query parameters.
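For illustration, here's roughly the kind of manual extraction PageJS performs (a sketch, not PageJS's actual source):
// Pull a query string that trails the hash, e.g. host#view?param1=val1
function hashQuery(url) {
  var hashIndex = url.indexOf('#');
  var hash = hashIndex >= 0 ? url.slice(hashIndex + 1) : '';
  var queryIndex = hash.indexOf('?');
  return queryIndex >= 0 ? hash.slice(queryIndex + 1) : '';
}
hashQuery('http://host#view?param1=val1'); // "param1=val1"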
If you need to stick with <iron-location>, you could add backward-compatible support by monkey-patching <iron-location>._urlChanged(), which is responsible for URL parsing for <app-location>.
Monkey-patched <app-location>:
Polymer({
  is: 'my-app',
  ready: function() {
    this._setupAppLocation();
  },
  _setupAppLocation: function() {
    // assumes <app-location id="location">
    const ironLocation = this.$.location.$$('iron-location');
    if (!ironLocation) return;
    ironLocation._urlChanged = function() {
      this._dontUpdateUrl = true;
      this.path = window.decodeURIComponent(window.location.pathname);
      if (window.location.hash.includes('?')) {
        const parts = window.location.hash.split('?');
        this.hash = window.decodeURIComponent(parts[0].slice(1));
        this.query = parts[1];
        // Prepend other query parameters found in standard location
        if (window.location.search) {
          this.query = window.location.search.substring(1) + '&' + this.query;
        }
      } else {
        this.query = window.location.search.substring(1);
      }
      this._dontUpdateUrl = false;
      this._updateUrl();
    };
  }
});
Alternatively, you could switch to a Polymer element that supports parsing query parameters in either form out of the box. I recommend <nebula-location> (which uses QS as its query string parser).
Example usage of <nebula-location>; the foo binding resolves from either of the URL forms shown below:
<nebula-location data="{{locationData}}"></nebula-location>
<div>foo=[[locationData.queryParams.foo]]</div>
#view?foo=123
?foo=123#view
We are using ExtJS 4.2.1 and Spring MVC / Spring Data REST. On a successful GET request, Data REST returns the following structure:
{
    content: [ ... objects .... ],
    links: [ ... list of rel links ... ],
    page: {
        size: 25,
        totalElements: 4,
        totalPages: 1,
        number: 1
    }
}
So, in my proxies, I have set the totalProperty to page.totalElements. Works great.
However, when we send a PUT request, the natural (and correct, I believe) response is a 204 NO CONTENT. Now, if our totalProperty were set to something like total (not page.xxx), it would be fine. But ExtJS tries to parse page.totalElements and throws an exception because page is null.
So how can I tell ExtJS to ignore the totalProperty on a PUT request?
Ha. Looks like I found a work-around. Not sure if I like it but it works. I changed my controller as such:
return new ResponseEntity<>("OK", HttpStatus.OK);
Whereas before, it was:
return new ResponseEntity(HttpStatus.OK);
ExtJS doesn't seem to mind that. Strange.
@cbmeeks, I had a similar issue in Ext.JS 4.1.3, and I posted the fix here.
/**
* Fix for Json reader when the HTTP response is Status Code: 204 No Content and
* at least one of the pair root/totalProperty is not a direct property of the returned json
* http://www.sencha.com/forum/showthread.php?127585
* @author rgralhoz
*/
Ext.data.reader.Json.override({
    createAccessor: (function () {
        var re = /[\[\.]/;
        return function (expr) {
            if (Ext.isEmpty(expr)) {
                return Ext.emptyFn;
            }
            if (Ext.isFunction(expr)) {
                return expr;
            }
            if (this.useSimpleAccessors !== true) {
                var i = String(expr).search(re);
                if (i >= 0) {
                    return Ext.functionFactory('obj',
                        'return obj' + (i > 0 ?
                            '.hasOwnProperty("' + expr.substring(0, i) + '")? obj.' + expr + ' : null' :
                            expr));
                }
            }
            return function (obj) {
                return obj[expr];
            };
        };
    }())
});
Disclaimer:
Please add this code to a fixes.js file in your project to override Ext.JS's function. Please note that:
The fix for 4.2.1 may be the same or similar.
Once you override their function, check whenever you upgrade Ext.JS that they haven't changed it; otherwise you'll lose updates to their code.
I have a REST service to which I am sending JSON data as ["1","2","3"] (a list of strings), and this works fine in the Firefox REST client plugin. But when sending the data from the application, the structure is {"0":"1","1":"2","2":"3"}, and I am not able to pass the data in that format. How can I convert {"0":"1","1":"2","2":"3"} to ["1","2","3"] so that I can send it through the application? Any help would be greatly appreciated.
If the format of the JSON is { "index" : "value" }, which is what I'm seeing in {"0":"1","1":"2","2":"3"}, then we can take advantage of that information and do this:
var myObj = {"0":"1","1":"2","2":"3"};
var convertToList = function(object) {
    var i = 0;
    var list = [];
    while (object.hasOwnProperty(i)) { // check if value exists for index i
        list.push(object[i]);          // add value into list
        i++;                           // increment index
    }
    return list;
};
var result = convertToList(myObj); // result: ["1", "2", "3"]
See fiddle: http://jsfiddle.net/amyamy86/NzudC/
Use a fake index to "iterate" through the list. Keep in mind that this won't work if there is a break in the indices; it can't handle something like {"0":"1","2":"3"}.
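If gaps in the indices are possible, here is a variant that walks whatever numeric keys actually exist (assumes an ES5-capable browser for Object.keys):
var sparse = {"0": "1", "2": "3"};
var list = Object.keys(sparse)
    .sort(function (a, b) { return a - b; }) // numeric order
    .map(function (k) { return sparse[k]; });
// list: ["1", "3"]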
You need to parse the JSON back into a JavaScript object. There are parsing tools in the later iterations of Dojo, as one of the other contributors already pointed out; however, most browsers support JSON.parse(), which is defined in ECMA-262 5th Edition (the specification that JS is based on). Its usage is:
var str = your_incoming_json_string,
    // here is the line ...
    obj = JSON.parse(str);
// DEBUG: pump it out to the console to see what it looks like
obj.forEach(function(entry) {
    console.log(entry);
});
For browsers that don't support JSON.parse(), you can implement it using json2.js; but since you are actually using Dojo, dojo.fromJson() is the way to go. Dojo takes care of browser independence for you.
var str = your_incoming_json_string,
    // here is the line ...
    obj = dojo.fromJson(str);
// DEBUG: pump it out to the console to see what it looks like
obj.forEach(function(entry) {
    console.log(entry);
});
If you're using an AMD version of Dojo then you will need to go back to the Dojo documentation and look at dojo/_base/json examples on the dojo.fromJson page.
I have this code as a cffunction that works fine:
<cfcomponent extends="core.core">
    <cffunction name="loadService" access="remote" returnformat="JSON">
        <cfscript>
            objResponse = '{"CONFIG":[["internal"],[ "success"]],"DATA":[["Message1"]]}';
        </cfscript>
        <cfreturn objResponse>
    </cffunction>
</cfcomponent>
I am trying to convert it to a full cfscript function like this:
component extends="core.core" {
    remote JSON function loadService() {
        objResponse = '{"CONFIG":[["internal"],[ "success"]],"DATA":[["Message1"]]}';
        SerializeJSON(objResponse);
        return objResponse;
    }
}
The first way returns JSON fine, and I can process it with jQuery. The second one throws an error: "The value returned from the loadService function is not of type JSON."
I have tried it with and without SerializeJSON, and both ways throw that error. I have also tried it without specifying JSON in the function syntax. That doesn't throw an error, but it does wrap a wddxPacket around the value. This is what it looks like when I don't specify JSON:
<wddxPacket version='1.0'><header/><data><string>{"CONFIG":[["internal"],[ "success"]],"DATA":[["Message1"]]}</string></data></wddxPacket>
I am stuck on this. Any help would be great. Thanks!
The correct CFScript syntax in CF9 is:
remote any function loadService() returnformat="JSON" {
Technically, "JSON" is not a valid returntype from a function (see here for all returntypes), but when you write:
remote JSON function
...you're basically saying that.
Notice in your tag-based cffunction call, you do not specify a returnType...so guess what it is by default? (hint: any).
It's easy to mix returnType and returnFormat up. A simple adjustment above and you should be good to go.
Complete Code
component extends="core.core" {
    remote any function loadService() returnFormat="JSON" {
        objResponse = '{"CONFIG":[["internal"],[ "success"]],"DATA":[["Message1"]]}';
        SerializeJSON(objResponse);
        return objResponse;
    }
}
Also, I noticed that you have
SerializeJSON(objResponse);
in your function. This line has no effect on your function's return, so it can easily be omitted, since your objResponse value is already a JSON string. But if the value of objResponse were something like
objResponse = {
    "CONFIG" = [["internal"], ["success"]],
    "DATA" = [["Message1"]]
};
then you could have done something like
return serializeJSON(objResponse);
which would have turned the complex data you had into a JSON string.
Here's the complete function
remote any function loadService()
    returnFormat="JSON"
{
    objResponse = {
        "CONFIG" = [["internal"], ["success"]],
        "DATA" = [["Message1"]]
    };
    return serializeJSON(objResponse);
}
Another way to specify the 'returnFormat' would be to use annotations:
component extends="core.core" {
    /**
     * @hint loads properties of an object and returns them as JSON
     * @output false
     * @returnFormat JSON
     */
    remote struct function loadService() {
        objResponse = {
            CONFIG = [["internal"],[ "success"]],
            DATA = [["Message1"]]
        };
        return objResponse;
    }
}