I'm trying to fetch one value from the data source website Quandl to be used within a MetaTrader 4 script. The data source site provides a method to export data via an API in several formats, including .csv, .json and .xml. I have chosen the .csv format, for which the data source website provides an API call in the following format:
https://www.quandl.com/api/v3/datasets/ADB/LAB_UNEMP_JPN.csv?rows=1&api_key=my_api_key
By using the rows=1 parameter in the above API call, I can choose to export just one value (which is the latest value).
Q1. Can I fetch the value straight from Quandl or do I have to save the dataset as a .csv file?
Because Quandl provides the API call (as seen above), would I be correct in assuming I can just fetch the value from their website and won't have to save the dataset to my computer as a .csv file, from which I would then have to fetch the latest value? I would much prefer to fetch the value straight from Quandl without saving any files.
Q2. How can I fetch the value to be used within my MT4 script?
I have unsuccessfully tried a method using FileOpen() to access the data on the site, and have then tried to print the value so that I can compare it to others. Is FileOpen() only for .csv files saved to my computer? I'd like to be able to print the value within my script once retrieved so that I can use it. Here is what I have so far:
int start() {
   while (!IsStopped()) {
      Sleep(2000);
      int    handle;
      double value;                                  // FileReadNumber() returns a double
      handle = FileOpen("https://www.quandl.com/api/v3/datasets/ADB/LAB_UNEMP_JPN.csv?rows=1&api_key=my_api_key", FILE_CSV|FILE_READ, ';');
      if (handle > 0)
      {
         value = FileReadNumber(handle);
         Print(value);
         FileClose(handle);
      }
   }
   return(0);
}
If anyone could aid me in my pursuit to fetch this value and print it within my script, it would be a huge help.
A1: No, you need not use a proxy-file for this API
If one tries the API call, using a published Quandl syntax of: <pragma>://<URL.ip>/<relative.URL>[?<par.i>=<val.i>[&<par.j>=<val.j>[&...]]]
the server side will push you the content of:
Date,Value
2013-12-31,4.0
So your code may use the Quandl API like this:
void OnStart()
{
string cookie = NULL,
headers;
char post[],
result[];
int res;
/* TODO: *
* Must allow MT4 to access the server URL, *
* you should add URL "https://www.quandl.com/api/v3/datasets/ADB/LAB_UNEMP_JPN.csv" *
* in the list of allowed URLs *
* ( MT4 -> Tools -> Options -> [Tab]: "Expert Advisors" ): */
string aDataSOURCE_URL = "https://www.quandl.com/api/v3/datasets/ADB/LAB_UNEMP_JPN.csv";
string aDataSOURCE_API = "rows=1&api_key=<My_API_Key>";        // NB: no spaces inside the query string
//-- Create the body of the POST request for API specifications and API-authorization
ArrayResize( post,
StringToCharArray( aDataSOURCE_API, // string text |--> [in] String to copy.
post, // uchar &array[] <--| [out] Array of uchar type.
0, // int start = 0 |--> [in] Position from which copying starts. Default - 0.
WHOLE_ARRAY, // int count = -1 |--> [in] Number of array elements to copy. Default -1 copies up to the array end or the terminating '\0' ( which is also copied ).
CP_UTF8 // uint cp = CP_ACP |--> [in] The value of the code page. For the most-used code pages provide appropriate constants.
)
- 1
);
//-- Reset the last error code
ResetLastError();
//-- Loading a html page from Quandl
int timeout = 5000; //-- Timeout below 1000 (1 sec.) is not enough for slow Internet connection
res = WebRequest( "POST", // const string method |--> [in] HTTP method.
aDataSOURCE_URL, // const string URL |--> [in] URL.
cookie, // const string cookie |--> [in] Cookie value.
NULL, // const string referrer |--> [in] Value of the Referer header of the HTTP request.
timeout, // int timeout |--> [in] Timeout in milliseconds.
post, // const char &data |--> [in] Data array of the HTTP message body
ArraySize( post ), // int data_size |--> [in] Size of the data[] array.
result, // char &result <--| [out] An array containing server response data.
headers // string &result_headers <--| [out] Server response headers.
);
//-- Check errors
if ( res == -1 )
{ Print( "WebRequest Error. Error code = ", GetLastError() ); //-- Perhaps the URL is not listed, display a message about the necessity to add the address
MessageBox( "Add the address '" + aDataSOURCE_URL + "' in the list of allowed URLs on tab 'Expert Advisors'", "Error", MB_ICONINFORMATION );
}
else //-- Load was successful
{
PrintFormat( "The data has been successfully loaded, size = %d bytes.", ArraySize( result ) );
//-- parse the content ---------------------------------------
/*
"Date,Value
2013-12-31,4.0"
*/
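//-- a minimal parsing sketch, assuming the exact two-line "Date,Value" payload shown above
string csv = CharArrayToString( result, 0, ArraySize( result ), CP_UTF8 );
string lines[];
if ( StringSplit( csv, '\n', lines ) > 1 )
{  string fields[];
   if ( StringSplit( lines[1], ',', fields ) == 2 )
      Print( "Latest value = ", StringToDouble( fields[1] ) );
}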
//-- consume the content -------------------------------------
//...
}
}
There are 4 principal items to take care of:
0: an MT4 permission to use a given URL
1: an API URL setup - <pragma>://<URL.ip>/<relative.URL>
2: an API const char &data[] assy. [?<par.i>=<val.i>[&<par.j>=<val.j>[&...]]]
3: an API int data_size length calculation
Addendum: This is more a list of reasons why one may want to avoid using the New-MQL4.56789 WebRequest() function variants:
Whereas the MQL4 documentation promises a simple use of the WebRequest() function variants, (cit.:) "1. Sending simple requests of type "key=value" using the header Content-Type: application/x-www-form-urlencoded.", the reality is far from the promised simple use-case:
0: DONE: an MT4 administrative step ( weakness: one cannot force MT4 to communicate the { http | https } protocol(s) over anything other than their default port(s) ~ { :80 | :443 } )
1: a URL consists of two parts ( three, if using a :port specifier, which does not work in MT4 ( ref. 0: right above ) ). The <URL.ip_address> is the first one and can be expressed in a canonical IPv4 form ( 10.38.221.136 ) or in a DNS-translatable form ( MT4_APAC_PRIMARY.broker.com ). The second part, the <relative.URL>, tells the HttpServer where to locate a file ( it is an HttpServer-relative file location ). The published WebRequest() permits using both parts joined together, ref. aDataSOURCE_URL.
2: a WebServer API, if constructed so, may permit some additional parameters to be specified and presented to the WebServer. The presentation depends on whether the { HTTP GET | HTTP POST } protocol-option is selected on the caller side.
3: each call to the MT4 WebRequest() also requires the caller to specify the length of the data content parameter ( ref. the use of ArraySize( post ) as int data_size ).
Related
I'm setting up a repo to be used in projects where CID (related) data needs to be transacted on-chain and where I'm following a work-flow of:
1.) Establishing the CID data;
2.) Transacting said data;
3.) Publishing/Importing data in to IPFS after a successful transaction.
The main purpose of the repo is to reliably determine CIDs without importing data into IPFS (step 1). The workflow is aimed at avoiding the risk of front-running, based on data becoming publicly available before the transaction in step 2 is completed (or even initiated). My thoughts on this perceived risk basically run down to this:
The purpose of the on-chain transaction is not only to uniquely identify content (say, NFT material), but also to establish/determine its provenance, or create some form of connection with its creator/(original) owner. Clearly the public availability of such content detracts from the core purpose of such a transaction, since it allows for - let's call it - NFT front-running; one could monitor the network/specific nodes generally involved in publishing NFT(-like) material to the IPFS network for announcements on data that has been added, and claim the mentioned connection for oneself.
Question: Is this "problem" as practical as I am perceiving it? I can only find very limited information on mitigating this risk, although 1.) the practice of creating gas-less NFTs is increasingly popular and 2.) most of the data will likely enter IPFS through pinning services (with presumably short announcement intervals) rather than via self-managed nodes, where one could programmatically decouple adding (pre-transaction) from pinning/announcement (post-transaction).
Issue(the main reason of this entry): I'm having difficulties establishing the CID for content exceeding the block-size.
Creating a DAG Node and adding children to it:
// parts of the test file cid-from-scratch.js
describe("Create DAG-PB root from scratch", function() {
let dagNode;
it("Creates root DAG NODE", async function() {
this.timeout(6000);
dagNode = await sliceAddLink(buffer, new DAGNode())
// Comparing the length of the Links property of the created root node with that retrieved from the local node
// using the same content
assert.equal(dagNode.Links.length, localDag.value.Links.length,
"Expected same amount of children in created, as in retrieved DAG ")
for (let i = 0; i < dagNode.Links.length; i++) {
console.log("Children Strings: ", dagNode.Links[i].Hash.toString())
// Comparing the strings of the children
assert.equal(dagNode.Links[i].Hash.toString(), localDag.value.Links[i].Hash.toString(), "Children's CID should be same")
}
console.log(dagNode)
})
})
/**
*
* @param {Buffer} buffer2Slice The full content Buffer
* @param {Object} dagNode new DAGNode()
*/
function sliceAddLink(buffer2Slice, dagNode) {
return new Promise(async function(resolve, reject) {
try {
while (buffer2Slice.length > 0) {
let slice = buffer2Slice.slice(0, 262144);
buffer2Slice = buffer2Slice.slice(262144);
let sliceAddResult = await createCid(slice, undefined, undefined, 85); // only cidCode (85) is passed explicitly, the rest falls back to defaults
let link = { Tsize: slice.length, Hash: sliceAddResult };
dagNode.addLink(link);
}
resolve(dagNode)
} catch (err) { reject(err) }
})
}
/**
*
* @param {Buffer} content
* @param {string} [hashAlg] // optional
* @param {number} [cidVersion] // optional
* @param {number} [cidCode] // optional - should be set to 85 in creating DAG-LINK
*/
async function createCid(content, hashAlg, cidVersion, cidCode) {
hashAlg = hashAlg || cidOptions.hashAlg;
cidVersion = cidVersion || cidOptions.cidVersion
cidCode = cidCode || cidOptions.code
let fileHash = await multiHashing(content, hashAlg)
return new CID(cidVersion, cidCode, fileHash)
}
The Links property of the created DAG matches that of the DAG retrieved from the local IPFS node and that from Infura (of course based on the same content). The problem is that, unlike the retrieved DAG nodes, the Data field of the created DAGNode is empty (and therefore yields a different CID):
DAGNode retrieved: Data property containing data
DAGNode created: Data property is Empty
I'm adding to IPFS like so:
/**
* @note ipfs.add with the preset options
* @param {*} content Buffer (of File) to be published
* @param {*} ipfs The ipfs instance involved
*/
async function assetToIPFS(content, ipfs) {
let result = await ipfs.add(content, cidOptions)
return result;
}
// Using the following ADD options
const cidOptions = {
cidVersion: 1, // ipfs.add default = 0
hashAlg: 'sha2-256',
code: 112
}
it("Adding publicly yields the same CID result", async function() {
// longer than the standard timeout because of interaction with the public IPFS gateway
this.timeout(6000);
_ipfsInfura = await ipfsInfura;
let addResult = await assetToIPFS(buffer, _ipfsInfura)
assert.equal(localAddResult.cid.toString(), addResult.cid.toString(), "Different CIDS! (expected same)")
assert.equal(localAddResult.size, addResult.size, "Expected same size!")
})
and subsequently getting the DAG (from both the local node and the Infura node) like so (to compare them with the created DAG):
// Differences in DAG-PB object representation in Infura and Local node!
it("dag_get local and dag_get infura yield the same Data and Links array", async function() {
this.timeout(6000);
let cid = localAddResult.cid
localDag = await dagGet(cid, _ipfsLocal);
let infuraDag = await dagGet(cid, _ipfsInfura);
console.log("Local DAG: ", localDag.value);
console.log("Infura DAG: ", infuraDag.value);
// Differences in DAG-PB object representation in Infura and Local node
// Data is Buffer in localDag and uint8array in infuraDag
assert.equalBytes(await dagData(localDag.value), await dagData(infuraDag.value), 'Expected Equal Data')
assert.equal(infuraDag.value.Links.length, localDag.value.Links.length, "Expected same amount of children")
assert(localDag.value.Links.length > 0, "Should have children (DAG-LINK objects)")
for (let i = 0; i < localDag.value.Links.length; i++) {
assert.equal(infuraDag.value.Links[i].Hash.toString(), localDag.value.Links[i].Hash.toString())
}
})
The IPFS docs on working with blocks state that "A given file's 'hash' is actually the hash of the root (uppermost) node in the DAG."
You could suspect that the DAG node's Data field plays a role in this. On the other hand, looking at the length of "Data", the js-ipfs examples on dag.put ('Some data') and the IPLD DAG-PB specification ("Data may be omitted or a byte array with a length of zero or more"), this property seems more arbitrary. (The Data array/buffer from both the Infura and the local IPFS node have the same content.)
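My current, unverified suspicion is that this Data field is exactly what is missing: for files added the regular way, the root's Data is a UnixFS "file" entry recording the size of every child block. A rough sketch of that idea with the ipfs-unixfs helper (the constructor takes 'file' or { type: 'file' } depending on the library version; sliceSizes is a made-up name for the chunk lengths collected in sliceAddLink):
const UnixFS = require('ipfs-unixfs')

function buildRootData(sliceSizes) {
    const file = new UnixFS('file')                   // a UnixFS file entry
    sliceSizes.forEach(s => file.addBlockSize(s))     // record every chunk's raw length
    return file.marshal()                             // protobuf bytes for the root DAGNode's Data
}

// e.g. new DAGNode(buildRootData(sliceSizes), dagNode.Links) instead of a bare new DAGNode()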
How can I create a root DAG node (using a content buffer and CID options), with not only the same Links property, but also the same Data property as the DAG root I'm getting after adding the same content buffer to an ipfs instance?
I am trying to retrieve POST data from an HTML form using a program written in C.
At the moment I am using:
char *formdata = getenv("QUERY_STRING");
if(formdata == NULL) /* no data retrieved */
This seems to be working fine with the form "GET" method but not with the "POST" method. How do I retrieve POST data?
POST data is sent in the request body, which follows the request headers after a blank line. In a CGI-BIN environment, you read it from STDIN.
Be warned that the server IS NOT REQUIRED to send you an EOF character (or some termination indicator) at the end of the POST data. Never read more than CONTENT_LENGTH bytes.
If I remember right, read stdin for POST data.
Edit for untested snippet
const char *len_ = getenv("CONTENT_LENGTH");          /* needs <stdlib.h> and <stdio.h> */
long len = (len_ != NULL) ? strtol(len_, NULL, 10) : 0;
char *postdata = malloc(len + 1);
if (!postdata) { /* handle error or */ exit(EXIT_FAILURE); }
size_t got = fread(postdata, 1, (size_t)len, stdin);  /* never read more than CONTENT_LENGTH bytes */
postdata[got] = '\0';
/* work with postdata */
free(postdata);
Why reinvent that wheel? Just use a library: http://libcgi.sourceforge.net/
What would be a good way to work with Chrome's incoming 1MB limit for native messaging extensions? The data that we would be sending to the extension is json-serialized gpx, if that matters.
When the original message is >1MB, it seems like this question really has two parts:
how to partition the data on the sending end (i.e. the client)
this part should be pretty trivial. Even if we need to split into separate self-contained complete gpx strings, that is pretty straightforward (see the sketch after this list).
how to join the <1MB messages back into the original >1MB message
is there a standard known solution for this question? We can call background.js (i.e. the function passed to chrome.runtime.onMessageExternal.addListener) once for each <1MB incoming message, but how would we combine the strings from those repeated calls into one response for the extension?
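For the first part, this is the kind of splitting we have in mind (illustrative only; the real sending end is the native host, so the equivalent would live in whatever language that host is written in):
// split a large JSON-serialized string into chunks below the 1MB native messaging limit
function toChunks(serialized, chunkSize) {
  var chunks = [];
  for (var i = 0; i < serialized.length; i += chunkSize) {
    chunks.push(serialized.slice(i, i + chunkSize));
  }
  return chunks;
}
// e.g. toChunks(JSON.stringify(gpxObject), 512 * 1024), sending each piece as its own message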
UPDATE 8-18-16:
what we've been doing is just appending each message 'chunk' to a buffer variable in background.js, and not sending it back to Chrome until disconnection:
var gpxText="";
port.onMessage.addListener(function(msg) {
// msg must be a JSON-serialized simple string;
// append each incoming msg to the collective gpxText string
// but do not send it to Chrome until disconnection
// console.log("received " + msg);
gpxText+=msg;
});
port.onDisconnect.addListener(function(msg) {
if (gpxText!="") {
sendResponse(JSON.parse(gpxText));
gpxText="";
} else {
sendResponse({status: 'error', message: 'something really bad happened'});
}
// build the response object here with msg, status, error tokens, and always send it
console.log("disconnected");
});
We will have to make that appending a bit smarter to handle and send both status and message keys/values, but that should be easy.
I have this same issue and have been scouring the web for the past couple of days to figure out something to do. In my application, I am currently shipping a JSON string over to the background script in chunks, having to create a subprotocol to handle this special case. E.g. my initial question might look like:
{action:"getImage",guid:"123"}
and the response for <1MB might look like:
{action:"getImage",guid:"123",status:"success",return:"ABBA..."}
where ABBA... represents a base64 encoding of the bytes. When >1MB, however, the response will look like:
{action:"getStream",guid:"123",status:"success",return:"{action:\"getImage\",guid:\"123\",return:\"ABBA...",more:true}
and upon receipt of a payload with action === 'getStream', the background page will immediately issue a new request like:
{action:"getStream",guid:"123"}
and the next response might look like:
{action:"getStream",guid:"123",status:"success",return:"...DEAF==",more:false}
so your onMessage handler would look something like:
var streams = {};   // map of guid -> accumulated chunks
function onMessage( e ) {
var guid = e.guid;
if ( e.action === 'getStream' ) {
if ( !streams[ guid ] ) streams[ guid ] = '';
streams[ guid ] += e[ 'return' ];
if ( e.more ) {
postMessage( { action: 'getStream', guid: guid } );
// no more processing to do here, bail
return;
}
e = JSON.parse( streams[ guid ] );
streams[ guid ] = null;
}
// do something with e as if it was never chunked
...
}
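For completeness, the host side of this subprotocol just hands out successive slices of the oversized, already-serialized response; roughly like this (illustrated in JavaScript for readability, although my actual host is not JavaScript and all the names below are made up):
var pending = {};           // guid -> the not-yet-sent remainder of an oversized response
var CHUNK = 900 * 1024;     // stay safely below the 1MB limit

// wrap an already-serialized response; start a stream when it is too large
function wrapResponse(guid, serialized) {
  if (serialized.length <= CHUNK) return JSON.parse(serialized);        // fits, send as-is
  pending[guid] = serialized.slice(CHUNK);
  return { action: 'getStream', guid: guid, status: 'success',
           return: serialized.slice(0, CHUNK), more: true };
}

// called for every follow-up { action: "getStream", guid: ... } request
function nextChunk(guid) {
  var piece = pending[guid].slice(0, CHUNK);
  pending[guid] = pending[guid].slice(CHUNK);
  var more = pending[guid].length > 0;
  if (!more) delete pending[guid];
  return { action: 'getStream', guid: guid, status: 'success', return: piece, more: more };
}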
it works, but I am somewhat convinced that it is slower than it should be (though this could be due to the slow feeling of the STDIO signaling and, in my particular app, additional signaling that has to happen for each new chunk).
Ideally I'd like to stream the file in a more efficient protocol supported natively by Chrome. I looked into WebRTC, but it would mean that I'd need to implement the API into my native messaging host (as best I can tell), which is not an option I'm willing to take on. I played with 'passing' the contents by file as such:
if ( e.action === 'getFile' ) {
xhr = new XMLHttpRequest();
xhr.onreadystatechange = function( e ) {
if ( e.target.readyState === 4 ) {
onMessage( e.target.responseText );
}
};
xhr.open( 'GET', chrome.extension.getURL( '/' + e.file ), true );
xhr.send();
return;
}
where I have my native message host write a .json file out to the extension's install directory, and it seems to work, but there is no way for me to reliably derive the path (without fudging things and hoping for the best), because as best I can tell the location of the extension's install path is determined by your Chrome user profile, and there's no API I could find to give me that path. Additionally, there's a 'version' folder created under your extension id which includes an _0 that I don't know how to calculate (is the _0 constant for some future use? does it tick up when an extension is published anew to the web store, but the version is not adjusted?).
At this point I'm out of ideas and I'm hoping someone will stumble across this question with some guidance.
My web application should be able to store and update (and also load) JSON data on a server.
However, the data may contain some big arrays where each save only appends a new entry.
My solution:
send updates to the server with a key-path within the json data.
Currently I'm sending the data with an XMLHttpRequest via jQuery, like this:
/**
* Asynchronously writes a file on the server (via PHP-script).
* @param {String} file complete filename (path/to/file.ext)
* @param content content that should be written. may be a js object.
* @param {Array} updatePath (optional), json only. not the entire file is written,
* but the given path within the object is updated. by default the path is supposed to contain an array and the
* content is appended to it.
* @param {String} key (optional) in combination with updatePath. if a key is provided, then the content is written
* to a field named as this parameters content at the data located at the updatePath from the old content.
*
* @returns {Promise}
*/
io.write = function (file, content, updatePath, key) {
if (utils.isObject(content)) content = JSON.stringify(content, null, "\t");
file = io.parsePath(file);
var data = {f: file, t: content};
if (typeof updatePath !== "undefined") {
if (Array.isArray(updatePath)) updatePath = updatePath.join('.');
data.a = updatePath;
if (typeof key !== "undefined") data.k = key;
}
return new Promise(function (resolve, reject) {
$.ajax({
type: 'POST',
url: io.url.write,
data: data,
success: function (data) {
data = data.split("\n");
if (data[0] == "ok") resolve(data[1]);
else reject(new Error((data[0] == "error" ? "PHP error:\n" : "") + data.slice(1).join("\n")));
},
cache: false,
error: function (j, t, e) {
reject(e);
//throw new Error("Error writing file '" + file + "'\n" + JSON.stringify(j) + " " + e);
}
});
});
};
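For clarity, a hypothetical call (the file name and the path inside the JSON are made up) that appends an entry to an array nested at data.log.entries would be:
// append an entry object to the array at data.log.entries inside some/file.json
io.write("some/file.json", { time: Date.now(), text: "new entry" }, ["data", "log", "entries"])
    .then(function (response) { console.log("saved:", response); })
    .catch(function (err) { console.error(err); });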
On the server, a PHP script manages the rest like this:
receives the data and checks if it's valid
check if the given file path is writable
if the file exists and is .json
read it and decode the json
return an error on invalid json
if there is no update path given
just write the data
if there is an update path given
return an error if the update path in the JSON data can't be traversed (or the file didn't exist)
update the data at the update-path (the traversal itself is sketched below this list)
write the pretty-printed json to file
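The update-path step itself is just a walk down the decoded object; shown in JavaScript for illustration (the PHP side does the equivalent with references):
// walk down the parsed JSON along updatePath, then append content (or set it under key)
function applyUpdate(root, updatePath, content, key) {
  var node = root;
  for (var i = 0; i < updatePath.length; i++) {
    if (node === null || typeof node !== "object" || !(updatePath[i] in node)) {
      throw new Error("update path can't be traversed: " + updatePath.slice(0, i + 1).join("."));
    }
    node = node[updatePath[i]];
  }
  if (typeof key !== "undefined") node[key] = content;   // write to a named field
  else node.push(content);                               // default: the path points at an array
  return root;
}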
However, I'm not perfectly happy, and problems have kept coming up over the last few weeks.
My Questions
Generally: how would you approach this problem? Alternative suggestions, databases? Any libraries that could help?
Note: I would prefer solutions that just use PHP or some standard Apache stuff.
One problem was that sometimes multiple writes on the same file were triggered. To avoid this I used Promises client-side (wrapped it because I read jQuery's deferred stuff isn't Promises/A+ compliant), but I don't feel 100% sure it is working. Is there a (file) lock in PHP that works across multiple requests?
Every now and then the JSON files break, and it's not clear to me how to reproduce the problem. At the time it breaks, I don't have a history of what happened. Any general debugging strategies for a client/server saving/loading process like this?
I wrote a comet-enabled web server that does diffs on updates of JSON data structures, for exactly the same reason. The server keeps a few versions of a JSON document and serves clients on different versions of the document with the updates they need to get to the most recent version of the JSON data.
Maybe you could reuse some of my code, written in C++ and CoffeeScript: https://github.com/TorstenRobitzki/Sioux
If you have concurrent write accesses to your data structure, are you sure that whoever writes to the file has the right version of the file in mind when reading the file?
I would like to know what I can do to upload attachments in CouchDB using the update function.
Here you will find an example of my update function to add documents:
function(doc, req){
if (!doc) {
if (!req.form._id) {
req.form._id = req.uuid;
}
req.form['|edited_by'] = req.userCtx.name
req.form['|edited_on'] = new Date();
return [req.form, JSON.stringify(req.form)];
}
else {
return [null, "Use POST to add a document."]
}
}
An example for removing documents:
function(doc, req){
if (doc) {
for (var i in req.form) {
doc[i] = req.form[i];
}
doc['|edited_by'] = req.userCtx.name
doc['|edited_on'] = new Date();
doc._deleted = true;
return [doc, JSON.stringify(doc)];
}
else {
return [null, "Document does not exist."]
}
}
thanks for your help,
It is possible to add attachments to a document using an update function by modifying the document's _attachments property. Here's an example of an update function which will add an attachment to an existing document:
function (doc, req) {
// skipping the create document case for simplicity
if (!doc) {
return [null, "update only"];
}
// ensure that the required form parameters are present
if (!req.form || !req.form.name || !req.form.data) {
return [null, "missing required post fields"];
}
// if there isn't an _attachments property on the doc already, create one
if (!doc._attachments) {
doc._attachments = {};
}
// create the attachment using the form data POSTed by the client
doc._attachments[req.form.name] = {
content_type: req.form.content_type || 'application/octet-stream',
data: req.form.data
};
return [doc, "saved attachment"];
}
For each attachment, you need a name, a content type, and body data encoded as base64. The example function above requires that the client sends an HTTP POST in application/x-www-form-urlencoded format with at least two parameters: name and data (a content_type parameter will be used if provided):
name=logo.png&content_type=image/png&data=iVBORw0KGgoA...
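If the client is a browser, one way (an untested sketch using only standard web APIs) to produce such a body from a File object would be the following; encodeURIComponent also takes care of the '+' problem mentioned below:
// read a File, base64-encode it, and POST it to the update handler as form-urlencoded data
async function uploadAttachment(file, updateUrl) {
  const dataUrl = await new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result);      // "data:image/png;base64,iVBOR..."
    reader.onerror = reject;
    reader.readAsDataURL(file);
  });
  const base64 = dataUrl.split(',')[1];
  const body = 'name=' + encodeURIComponent(file.name) +
               '&content_type=' + encodeURIComponent(file.type) +
               '&data=' + encodeURIComponent(base64);
  return fetch(updateUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: body
  });
}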
To test the update function:
Find a small image and base64 encode it:
$ base64 logo.png | sed 's/+/%2b/g' > post.txt
The sed script encodes + characters so they don't get converted to spaces.
Edit post.txt and add name=logo.png&content_type=image/png&data= to the top of the document.
Create a new document in CouchDB using Futon.
Use curl to call the update function with the post.txt file as the body, substituting in the ID of the document you just created.
curl -X POST -d @post.txt http://127.0.0.1:5984/mydb/_design/myddoc/_update/upload/193ecff8618678f96d83770cea002910
This was tested on CouchDB 1.6.1 running on OSX.
Update: @janl was kind enough to provide some details on why this answer can lead to performance and scaling issues. Uploading attachments via an upload handler has two main problems:
The upload handlers are written in JavaScript, so the CouchDB server may have to fork() a couchjs process to handle the upload. Even if a couchjs process is already running, the server has to stream the entire HTTP request to the external process over stdin. For large attachments, the transfer of the request can take significant time and system resources. For each concurrent request to an update function like this, CouchDB will have to fork a new couchjs process. Since the process runtime will be rather long because of what is explained next, you can easily run out of RAM, CPU or the ability to handle more concurrent requests.
After the _attachments property is populated by the upload handler and streamed back to the CouchDB server (!), the server must parse the response JSON, decode the base64-encoded attachment body, and write the binary body to disk. The standard method of adding an attachment to a document -- PUT /db/docid/attachmentname -- streams the binary request body directly to disk and does not require the two processing steps.
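For comparison, the standard method is a single request (the rev query parameter below is a placeholder for the document's current revision):
curl -X PUT -H "Content-Type: image/png" --data-binary @logo.png http://127.0.0.1:5984/mydb/193ecff8618678f96d83770cea002910/logo.png?rev=<current_rev>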
The function above will work, but there are non-trivial issues to consider before using it in a highly-scalable system.