I'm using this code that I found here. The problem is that I need to add multiple minimum order amounts for other postcodes. Help!
add_action( 'woocommerce_checkout_process', 'wc_minimum_order_amount' );
add_action( 'woocommerce_before_cart', 'wc_minimum_order_amount' );
function wc_minimum_order_amount() {
    $minimum   = 20; // Set the minimum order value
    $postcodes = array( '26224', '26222', '26442', '26443' ); // Define your targeted postcodes in the array
    $postcode  = ''; // Initializing

    if ( isset( $_POST['shipping_postcode'] ) && ! empty( $_POST['shipping_postcode'] ) ) {
        $postcode = $_POST['shipping_postcode'];
    } elseif ( isset( $_POST['billing_postcode'] ) && ! empty( $_POST['billing_postcode'] ) ) {
        $postcode = $_POST['billing_postcode'];
    } else {
        $postcode = WC()->customer->get_shipping_postcode();
        if ( empty( $postcode ) ) {
            $postcode = WC()->customer->get_billing_postcode();
        }
    }

    if ( WC()->cart->total < $minimum && ! empty( $postcode ) && in_array( $postcode, $postcodes ) ) {
        $error_notice = sprintf( 'Your order total is %s - the minimum order amount is %s',
            wc_price( WC()->cart->total ),
            wc_price( $minimum )
        );
        if ( is_cart() ) {
            wc_print_notice( $error_notice, 'error' );
        } else {
            wc_add_notice( $error_notice, 'error' );
        }
    }
}
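A minimal sketch of one way to handle several minimums, assuming the postcode detection stays as in the snippet above; the per-postcode amounts here are illustrative:

```php
// Hypothetical per-postcode minimums: each postcode maps to its own minimum order value.
$minimums = array(
    '26224' => 20,
    '26222' => 20,
    '26442' => 30, // illustrative amounts
    '26443' => 30,
);

// Look up the minimum for the detected postcode (0 = no minimum enforced).
$minimum = isset( $minimums[ $postcode ] ) ? $minimums[ $postcode ] : 0;

if ( $minimum > 0 && WC()->cart->total < $minimum ) {
    // Build and print/add the error notice exactly as in the original function.
}
```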
I would like to sort the values in my JSON file by "createdAt" and use them in the plot function. As you can see, this column stores a date value, so I've converted it. I've applied the sort function, but when I inspect the output of the data, the sort does not seem to be applied.
data = loadjson('C:/data/default.json');
count_data = sum(cellfun(@(x) numel(x), data.Location)); % returns 21
for i = 1:count_data
    createdAt = cellfun( @(cellElem) cellElem.createdAt, data.Location, 'UniformOutput', false);
    createdAtDate = datetime(createdAt(i), 'InputFormat', 'dd-MM-yyyy HH:mm:ss', 'Format', 'dd-MM-yyyy HH:mm:ss');
    [~, X] = sort(createdAtDate, 'descend');
    out = data(X);
end
for i = 1:count_data
    x = cellfun( @(cellElem) cellElem.createdAt, out.Location, 'UniformOutput', false);
    disp(x);
end
My JSON file:
"Location": [
    {
        "id": "0b5965e5-c509-4522-a525-8ef5a49dadaf",
        "measureId": "5a6e9b79-dbb1-4482-acc1-d538f68ef01f",
        "locationX": 0.9039769252518151,
        "locationY": 0.2640594070404616,
        "createdAt": "06-01-2021 19:38:44"
    },
    {
        "id": "18714a2f-a8b3-4dc6-8a5b-114497fa9671",
        "measureId": "671f52bc-a066-494a-9dce-6e9ccfac6c1d",
        "locationX": 1.5592001730078755,
        "locationY": 0.5207689756815629,
        "createdAt": "06-01-2021 19:35:24"
    },
Thanks in advance.
You need to extract all of the data you need, then sort
i.e.
x = cellfun( @(cellElem) cellElem.locationX, data.Location );
y = cellfun( @(cellElem) cellElem.locationY, data.Location );
% Get date strings
d = cellfun( @(cellElem) cellElem.createdAt, data.Location, 'UniformOutput', false );
% Convert to datetime
d = datetime( d, 'InputFormat', 'dd-MM-yyyy HH:mm:ss' );
% Get the sort order
[~,idx] = sort( d );
% Sort other arrays
x = x(idx);
y = y(idx);
Another option would be to use tables
x = cellfun( @(cellElem) cellElem.locationX, data.Location );
y = cellfun( @(cellElem) cellElem.locationY, data.Location );
% Get dates
d = cellfun( @(cellElem) cellElem.createdAt, data.Location, 'UniformOutput', false );
d = datetime( d, 'InputFormat', 'dd-MM-yyyy HH:mm:ss' );
% Create table
t = table( x(:), y(:), d(:), 'VariableNames', {'locationX','locationY','createdAt'} );
% Sortrows
t = sortrows( t, 'createdAt' );
You have to use a table rather than a matrix here (although sortrows can accept either) because of the mixed data types across the columns.
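Either way, the sorted data can then be passed straight to plot; a minimal usage sketch (assuming the x, y, and t variables from the snippets above):

```matlab
% Plot the locations in chronological order (array version)
plot( x, y, '-o' );
% Or, from the table version, after sortrows:
plot( t.locationX, t.locationY, '-o' );
```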
I just finished building my algorithm, but now I need to export data from the MetaTrader terminal to a CSV file every minute so that my algorithm can read it and run predictions.
There are multiple ways online to export MetaTrader data to a CSV file in real time, but I can't find anything that will let me export even just the open price of the new candle as soon as it forms.
I'd like to export the last 10 OHLC candles on a minute timeframe, plus the open price of the current, 11th candle: the candle that's STILL forming and hasn't closed yet. I just need its open price as soon as the candle starts.
Any ideas? I'm stuck here.
UPDATE
I added the code.
This current code is a MetaTrader script that fetches the past 10 OHLCV candles and the 11th candle, as I mentioned.
However, I have three problems with this script:
It does not allow me to overwrite the existing CSV.
It does not run in real time and update constantly.
The 11th candle is not the latest one (the candle still in formation).
Any help?
//+------------------------------------------------------------------+
#include <stdlib.mqh>
#include <stderror.mqh>
//+------------------------------------------------------------------+
//| Input Parameters Definition |
//+------------------------------------------------------------------+
extern int BarCount = 11;
extern string Pairs = "EURUSD";
extern string delimiter = ",";
//+------------------------------------------------------------------+
//| Local Parameters Definition |
//+------------------------------------------------------------------+
datetime lastExport[];
string pairs[];
//+------------------------------------------------------------------+
//| Custom indicator initialization function |
//+------------------------------------------------------------------+
int init()
{
    //------------------------------------------------------------------
    Split(Pairs, pairs, ",");
    //------------------------------------------------------------------
    if (ArraySize(pairs) == 0 || StringTrimLeft(StringTrimRight(pairs[0])) == "")
    {
        Alert("Pairs are not entered correctly please check it...");
        return (0);
    }
    //------------------------------------------------------------------
    ArrayResize(lastExport, ArraySize(pairs));
    ArrayInitialize(lastExport, 0);
    //------------------------------------------------------------------
    Comment("quote exporter is active :)");
    //------------------------------------------------------------------
    return (0);
}
//+------------------------------------------------------------------+
//| Custom indicator deinitialization function |
//+------------------------------------------------------------------+
int deinit()
{
    //------------------------------------------------------------------
    Comment("");
    //------------------------------------------------------------------
    return (0);
}
//+------------------------------------------------------------------+
//| Custom indicator iteration function |
//+------------------------------------------------------------------+
int start()
{
    //------------------------------------------------------------------
    if (ArraySize(pairs) == 0 || StringTrimLeft(StringTrimRight(pairs[0])) == "") return (0);
    //------------------------------------------------------------------
    BarCount = MathMin(Bars, BarCount);
    //------------------------------------------------------------------
    for (int j = 0; j < ArraySize(pairs); j++)
    {
        if (lastExport[j] == Time[0]) continue;
        lastExport[j] = Time[0];
        if (StringTrimLeft(StringTrimRight(pairs[j])) == "") continue;
        if (MarketInfo(pairs[j], MODE_BID) == 0) { Alert("symbol " + pairs[j] + " is not loaded"); continue; }
        //------------------------------------------------------------------
        string file = pairs[j] + "_" + GetTimeFrameName(0) + ".csv";
        int log = FileOpen(file, FILE_CSV|FILE_WRITE, "~");
        if (log < 0) { Alert("can not create/overwrite csv file " + file + "!"); continue; }
        string buffer;
        buffer = "Date"+delimiter+"Time"+delimiter+"Open"+delimiter+"High"+delimiter+"Low"+delimiter+"Close"+delimiter+"Volume";
        FileWrite(log, buffer);
        int digits = MarketInfo(pairs[j], MODE_DIGITS);
        for (int i = BarCount; i >= 1; i--)
        {
            buffer = TimeToStr(Time[i], TIME_DATE)+delimiter+TimeToStr(Time[i], TIME_MINUTES)+delimiter+DoubleToStr(iOpen(pairs[j], 0, i), digits)+delimiter+DoubleToStr(iHigh(pairs[j], 0, i), digits)+delimiter+DoubleToStr(iLow(pairs[j], 0, i), digits)+delimiter+DoubleToStr(iClose(pairs[j], 0, i), digits)+delimiter+DoubleToStr(iVolume(pairs[j], 0, i), 0);
            FileWrite(log, buffer);
        }
    }
    //------------------------------------------------------------------
    return (0);
}
//+------------------------------------------------------------------+
string GetTimeFrameName(int TimeFrame)
{
    switch (TimeFrame)
    {
        case PERIOD_M1:  return ("M1");
        case PERIOD_M5:  return ("M5");
        case PERIOD_M15: return ("M15");
        case PERIOD_M30: return ("M30");
        case PERIOD_H1:  return ("H1");
        case PERIOD_H4:  return ("H4");
        case PERIOD_D1:  return ("D1");
        case PERIOD_W1:  return ("W1");
        case PERIOD_MN1: return ("MN1");
        case 0:          return (GetTimeFrameName(Period()));
    }
}
//+------------------------------------------------------------------+
void Split(string buffer, string &splitted[], string separator)
{
    string value = "";
    int index = 0;
    ArrayResize(splitted, 0);
    if (StringSubstr(buffer, StringLen(buffer) - 1) != separator) buffer = buffer + separator;
    for (int i = 0; i < StringLen(buffer); i++)
        if (StringSubstr(buffer, i, 1) == separator)
        {
            ArrayResize(splitted, index + 1);
            splitted[index] = value;
            index++;
            value = "";
        }
        else
            value = value + StringSubstr(buffer, i, 1);
}
//+------------------------------------------------------------------+
It's a long story. A quick fix of a conceptually wrong idea in an inappropriate algorithmisation is listed, with remarks, below.
Never, indeed NEVER, do anything like this as a CustomIndicator:
Why? After some internal re-designs, MetaQuotes, Inc. has confirmed that all, yes ALL, CustomIndicator code-execution units that operate inside the MT4 platform SHARE one SINGLE solo THREAD.
Never attempt to place any kind of slow (high-latency / low-throughput) activity like fileIO into a CustomIndicator. NEVER. 10~15 ms for fileIO kills the real-time-ness, as TLP-durations are well under 20 ms for Majors in prime time (ref. picture).
( For more details, you may check other posts on algorithmic-trading ).
Much less any BLOCKING one, Alert() being one such example.
The best thing for RealTime?
Better use a Script-type of MQL4 code, not a CustomIndicator.
Use a process-to-process low-latency messaging like nanomsg or zeromq.
I started using a ZeroMQ wrapper for MQL4 many years ago for moving aDataSEGMENT[] into an external AI/ML-predictor engine and returning predictions back, with a net turn-around time well under 80 ms, including the AI/ML-prediction computing.
This code is less wrong:
//+------------------------------------------------------------------+
#include <stdlib.mqh>
#include <stderror.mqh>
//+------------------------------------------------------------------+
extern int BarCount = 11;
extern string Pairs = "EURUSD";
extern string delimiter = ",";
datetime lastExport[];
string fileNAME[]; // USE INSTEAD OF REPETITIVE RE-ASSIGNMENTS
string pairs[];
//+------------------------------------------------------------------+
int init() {
//--------------------------------------------------------------
Split( Pairs, pairs, "," ); // REF. BELOW FOR A PROPER APPROACH
//--------------------------------------------------------------
if ( 0 == ArraySize( pairs )
|| "" == StringTrimLeft( StringTrimRight( pairs[0] ) )
){ Alert( "WARN: Pairs are not entered correctly please check it..." ); // !BLOCKS!
return( 0 );
}
//--------------------------------------------------------------
ArrayResize( lastExport, ArraySize( pairs ) );
ArrayInitialize( lastExport, 0 );
ArrayResize( fileNAME, ArraySize( pairs ) ); // POPULATE
   for ( int j = 0; j < ArraySize( pairs ); j++ ) fileNAME[j] = StringFormat( "%s_M1.CSV", pairs[j] );
//--------------------------------------------------------------
Comment( "INF: Script started. A quote exporter is active :)" );
//--------------------------------------------------------------
return( 0 );
}
//+------------------------------------------------------------------+
int deinit() { // USE OnDeinit(){...} SYNTAX WITH REASON PARAMETER
//--------------------------------------------------------------
Comment( "INF: Script will terminate." );
//--------------------------------------------------------------
return( 0 );
}
//+------------------------------------------------------------------+
int start() {
//--------------------------------------------------------------
if ( 0 == ArraySize( pairs )
|| "" == StringTrimLeft( StringTrimRight( pairs[0] ) )
) return( 0 );
//--------------------------------------------------------------
for ( int j = 0;
j < MathMin( ArraySize( pairs ), CHART_MAX - 1 );
j++ ) //--------------------------------------------------iterateOverAllListedINSTRUMENTs:
ChartOpen( pairs[j], PERIOD_M1 ); // enforce MT4 DataPumps to pre-load & collect QUOTEs
//------------------------------------------------------------------
while ( True ) { //-------------------------------------------------iterateINFINITELY:
ulong loopSTART = GetMicrosecondCount();
for ( int j = 0; j < ArraySize( pairs ); j++ ) { //---------iterateOverAllListedINSTRUMENTs:
if ( "" == StringTrimLeft( StringTrimRight( pairs[j] ) ) ) continue; // .LOOP-NEXT INSTRUMENT
else RefreshRates(); // .REFRESH MT4 DataPumps
if ( 0 == MarketInfo( pairs[j], MODE_BID ) ) { Alert( "symbol " + pairs[j] + " is not loaded" ); continue; } // .LOOP-NEXT INSTRUMENT // !BLOCKS!
if ( lastExport[j] == iTime( pairs[j], PERIOD_CURRENT, 0 ) ) continue; // .LOOP-NEXT INSTRUMENT
else lastExport[j] = iTime( pairs[j], PERIOD_CURRENT, 0 );
int digits = MarketInfo( pairs[j], MODE_DIGITS );
//----------------------------------------------------
int logFH = FileOpen( fileNAME[j], FILE_CSV|FILE_WRITE, delimiter );
if ( logFH == INVALID_HANDLE ) { Alert( StringFormat( "INF: Can not create / overwrite csv file(%s)! Errno(%d)", fileNAME[j], GetLastError() ) ); continue; } // !BLOCKS!
//----------------------------------------------------fileIO RTO:
FileWrite( logFH, "Date", "Time", "Open", "High", "Low", "Close", "Volume" );
for ( int i = MathMin( BarCount, iBars( pairs[j], PERIOD_CURRENT ) ); i >= 1; i-- )
FileWrite( logFH, TimeToStr( iTime( pairs[j], PERIOD_CURRENT, i ), TIME_DATE ),
TimeToStr( iTime( pairs[j], PERIOD_CURRENT, i ), TIME_MINUTES ),
DoubleToStr( iOpen( pairs[j], PERIOD_CURRENT, i ), digits ),
DoubleToStr( iHigh( pairs[j], PERIOD_CURRENT, i ), digits ),
DoubleToStr( iLow( pairs[j], PERIOD_CURRENT, i ), digits ),
DoubleToStr( iClose( pairs[j], PERIOD_CURRENT, i ), digits ),
DoubleToStr( iVolume( pairs[j], PERIOD_CURRENT, i ), 0 )
);
//----------------------------------------------------fileIO DONE
FileClose( logFH );// # a cost ~ 15-18 [ms]
}
//----------------------------------------------------------loopIO DONE
Comment( StringFormat( "INF: Last loopIO-TAT took %d [us]. Will sleep past aNewBarEVENT( M1 )... ~ %s ( + %d [s] )",
GetMicrosecondCount()-loopSTART,
TimeToStr( TimeLocal(), TIME_MINUTES ),
( 60 - MathMod( TimeLocal(), 60 ) )
)
);
Sleep( 250 + 1000 * ( 60 - MathMod( TimeLocal(), 60 ) ) );
//----------------------------------------------------------loopWAIT past aNewBarEVENT
}
//------------------------------------------------------------------
return( 0 );
}
//+------------------------------------------------------------------+
string GetTimeFrameName( int TimeFrame ) {
switch( TimeFrame ) { case PERIOD_M1: return( "M1" );
case PERIOD_M5: return( "M5" );
case PERIOD_M15: return( "M15" );
case PERIOD_M30: return( "M30" );
case PERIOD_H1: return( "H1" );
case PERIOD_H4: return( "H4" );
case PERIOD_D1: return( "D1" );
case PERIOD_W1: return( "W1" );
case PERIOD_MN1: return( "MN1" );
case 0: return( GetTimeFrameName( Period() ) );
}
}
//+------------------------------------------------------------------+
void Split( string buffer, string &splitted[], string separator ) {
string value = "";
int index = 0;
ArrayResize( splitted, 0 );
if ( StringSubstr( buffer, StringLen( buffer ) - 1 ) != separator ) buffer = buffer + separator;
for ( int i = 0;
i < StringLen( buffer );
i++ )
if ( StringSubstr( buffer, i, 1 ) == separator // a case, when (string) buffer[i] is actually a next (string) separator
|| StringLen( buffer ) == i // a case, when (string) buffer does not end with a (string) separator
){ //----------------------------------------------// ONCE A MANUAL FIX ENABLED TO PARSE A SINGLE-INSTRUMENT (string) buffer:
ArrayResize( splitted, index + 1 );
splitted[index] = StringTrimLeft( StringTrimRight( value ) );
index++;
value = "";
}
else
value = value + StringSubstr( buffer, i, 1 );
/* **************************************************************** WORTH READING THE DOCUMENTATION:
USING A BUILT-IN FUNCTION WOULD BE MUCH SIMPLER AND SAFER:
>
> int StringSplit( const string string_value, // A string to search in
> const ushort separator, // A separator using which substrings will be searched
> string &result[] // An array passed by reference to get the found substrings
> );
int SplitINSTRUMENTs( string buffer, string &splitted[], string separator ){
return( StringSplit( buffer,
StringGetCharacter( separator ),
splitted
)
); // -------------------------------------------------WOULD LOVELY FIX THE WHOLE CIRCUS
}
*/
}
//+------------------------------------------------------------------+
I would like to use information from a live Google Map (KML data) in spreadsheets. The code below handles it, but not as fast as required. It must handle up to 10 layers, with about 2000 placemarks, line positions and/or polygon positions in the interesting (parameter "restrict") layer (or in all layers). The size of the KML (XML) data is therefore often quite large.
Question 1: How (if possible) can caching and/or some other solution be implemented to handle about 2 MB of XML data?
Question 2: What improvements can be implemented to streamline the code, add flexibility to dynamically handle an unknown-in-advance number of columns/rows, and lower the execution time?
/**
 * Fetch a Google Map and write the data to the spreadsheet (URL and layer).
 *
 * @param {url} input the URL associated with the Google Map (should contain &forcekml=1).
 * @param {restrict} input the Layer name you want to restrict the data to.
 * @return a range of data depending on the info in the map
 * @customfunction
 */
function IMPORTMAP( url, restrict ) {
  /* One key only allows 100k
  var cache = CacheService.getPublicCache();
  var cached = cache.get( "google-maps-xml" );
  var cached = cache.getAll('?'); // Unknown keys in advance that also may change from time to time.
  if ( cached != null ) {
    return cached;
  }*/
  var txt = UrlFetchApp.fetch( url ).getContentText();
  var sheet = SpreadsheetApp.getActiveSheet();
  var width = sheet.getMaxColumns();
  var height = sheet.getMaxRows();
  var xmlDoc = XmlService.parse( txt );
  var root = xmlDoc.getRootElement();
  var atom = XmlService.getNamespace( 'http://www.opengis.net/kml/2.2' );
  var xml2csv = [[height],[10]];
  var labels = [], label = '', counter = 0, o = 0;
  var documents = root.getChildren( 'Document', atom );
  for ( var i = 0; i < documents.length; i++ ) {
    var Folders = documents[i].getChildren( 'Folder', atom );
    for ( var j = 0; j < Folders.length && j <= height; j++ ) {
      if ( Folders[j].getChild( 'name', atom ).getValue() == restrict ) {
        var Placemarks = Folders[j].getChildren( 'Placemark', atom );
        for ( var k = 0; k < Placemarks.length; k++ ) {
          var nodes = Placemarks[k].getChildren();
          for ( var l = 0; l < nodes.length; l++ ) {
            var data = nodes[l].getChildren();
            if ( data.length > 0 ) {
              for ( var m = 0; m < data.length && counter <= width; m++ ) {
                if ( data[m].getAttribute( 'name' ) != null ) {
                  if ( labels[ data[m].getAttribute( 'name' ).getValue().trim() ] == null ) {
                    labels[ data[m].getAttribute( 'name' ).getValue().trim() ] = counter;
                    xml2csv[ 0 ][ counter++ ] = data[m].getAttribute( 'name' ).getValue().trim();
                  }
                }
                if ( data[m].getChild( 'value', atom ) != null ) {
                  xml2csv[ k + 1 ][ labels[ data[m].getAttribute( 'name' ).getValue().trim() ] ] = data[m].getChild( 'value', atom ).getValue().trim();
                } else {
                  o = labels[ data[m].getName().trim() ];
                  if ( o == null ) {
                    labels[ data[m].getName().trim() ] = counter;
                    o = counter;
                    xml2csv[ 0 ][ counter++ ] = data[m].getName().trim();
                  }
                  if ( xml2csv[ k + 1 ] == null ) {
                    xml2csv[ k + 1 ] = new Array( counter );
                  }
                  xml2csv[ k + 1 ][ labels[ data[m].getName().trim() ] ] = data[m].getValue().trim();
                }
              }
            } else {
              if ( label == '' )
                label = nodes[l].getName().trim();
              o = labels[ nodes[l].getName().trim() ];
              if ( o == null ) {
                labels[ nodes[l].getName().trim() ] = counter;
                o = counter;
                xml2csv[ 0 ][ counter++ ] = nodes[l].getName().trim();
              }
              if ( xml2csv[ k + 1 ] == null ) {
                xml2csv[ k + 1 ] = new Array( counter );
              }
              xml2csv[ k + 1 ][ o ] = nodes[l].getValue().trim();
            }
          }
        }
      }
    }
  }
  //cache.putAll( xml2csv, 1500 ); // cache for 25 minutes
  return xml2csv;
}
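On Question 1: CacheService caps each cached value at about 100 KB, so a ~2 MB KML payload cannot live under a single key. One workaround is to split the fetched text into chunks under the limit, store them under derived keys with putAll, and reassemble on read. A sketch, assuming illustrative key names and chunk size (the helper names chunkString, putChunked, and getChunked are hypothetical, not part of any API):

```javascript
// Split a string into fixed-size chunks (size in characters; use a smaller
// size if the text contains multi-byte characters, since the cache limit is in bytes).
function chunkString(str, size) {
  var chunks = [];
  for (var i = 0; i < str.length; i += size) {
    chunks.push(str.substring(i, i + size));
  }
  return chunks;
}

// Store a large value as key_n (chunk count) plus key_0 .. key_(n-1).
function putChunked(cache, key, value, ttlSeconds) {
  var chunks = chunkString(value, 100 * 1024);
  var entries = {};
  entries[key + '_n'] = String(chunks.length);
  for (var i = 0; i < chunks.length; i++) {
    entries[key + '_' + i] = chunks[i];
  }
  cache.putAll(entries, ttlSeconds);
}

// Reassemble the value, or return null if it was never cached (or expired).
function getChunked(cache, key) {
  var n = cache.get(key + '_n');
  if (n == null) return null;
  var parts = [];
  for (var i = 0; i < Number(n); i++) {
    parts.push(cache.get(key + '_' + i));
  }
  return parts.join('');
}
```

Inside IMPORTMAP this would wrap the raw KML text (e.g. `putChunked(cache, url, txt, 1500)` before parsing, `getChunked(cache, url)` on entry), so the expensive UrlFetchApp call is skipped on repeat runs; individual chunks of one value can still expire independently, so a production version should also verify all parts came back non-null.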
Please note that the code only handles specific KML data in its current form and may not be suitable for a wider audience.
This is the original method. My intent was to abstract away many of the specifics of the code, to improve readability, by placing the specifics in functions which performed the same action but had a more readable name.
With this first method I achieve the desired behavior, and rows[x] is properly added to lineRows.
def getAllRowsForLine( rows, index ) {
    def lineRows = [rows[index]]
    def newOperatorNotFound
    def x = index + 1
    if ( x <= rows.size() - 1 ) {
        newOperatorNotFound = true
        while ( x <= ( rows.size() - 1 ) && newOperatorNotFound ) {
            if ( rows[x].PGM_PROC_OPE.trim() == "" ) {
                lineRows << rows[x]
            } else if ( rows[x].PGM_PROC_TY == "AN" || rows[x].PGM_PROC_TY == "OR" ) {
                lineRows << rows[x]
            } else {
                newOperatorNotFound = false
            }
            x++
        }
    }
    return lineRows
}
Here is the refactored code, along with the related methods for context.
This method does not result in the desired behavior, breaking the loop after the first rows[x] is added to lineRows.
def getAllRowsForLine2( rows, index ) {
    def lineRows = [rows[index]]
    def newOperatorNotFound
    def i = index + 1
    if ( moreRows( rows, i ) ) {
        newOperatorNotFound = true
        while ( moreRows( rows, i ) && newOperatorNotFound ) {
            if ( operatorEmpty( rows, index ) ) {
                lineRows << rows[i]
            } else if ( procTypeAnd( rows, i ) || procTypeOr( rows, i ) ) {
                lineRows << rows[i]
            } else {
                newOperatorNotFound = false
            }
            i++
        }
    }
    return lineRows
}

def operatorEmpty( rows, index ) {
    return rows[index].PGM_PROC_OPE.trim() == ""
}

def procTypeAnd( rows, index ) {
    return rows[index].PGM_PROC_TY == "AN"
}

def procTypeOr( rows, index ) {
    return rows[index].PGM_PROC_TY == "OR"
}

def moreRows( rows, index ) {
    return index <= ( rows.size() - 1 )
}
As far as I can tell, these things are equivalent. I ran the code below to test the equivalency of the functions, and it returns true.
println lineProcessor.getAllRowsForLine( rows, 0 ) == lineProcessor.getAllRowsForLine2( rows, 0 )
=> true
Oops, I realized that I had used index in the call to operatorEmpty instead of i. If I change index to i, the function performs as expected.
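For reference, the one-line fix inside getAllRowsForLine2 (the rest of the method is unchanged):

```groovy
// was: if ( operatorEmpty( rows, index ) )
if ( operatorEmpty( rows, i ) ) {
    lineRows << rows[i]
}
```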