I defined the prices for shipment as follows
As you can see, for any order with a total weight in the range [0, 1] kg the shipping cost should be 4€. Unfortunately, for an order of that kind the shipping cost shown is just 3.36€.
There is a workaround: I can set up the price in the backend so that 4€ is shown in the frontend:
4€ in backend ------> 3.36€ in frontend
amount in backend ------> 4€ in frontend
This implies 3.36 × amount = 4 × 4, i.e. amount = 16 / 3.36 ≈ 4.76€
I tested that and it really gives 4€ in the frontend, but I am not sure whether this is the most suitable way to do it in Shopware.
I appreciate your help.
Is it possible that 4€ is the price with 19% tax and 3.36€ is the price without 19% tax? (On the second image you can see the total of the taxes.)
Kind regards
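The arithmetic behind that comment checks out: 3.36€ is exactly the net equivalent of a 4€ gross price at 19% VAT. A quick sanity check (plain Python, just illustrating the tax math, not Shopware's API):

```python
# Gross price entered in the backend and the assumed 19% VAT rate
gross = 4.00
vat = 0.19

# Net price = gross / (1 + VAT); this matches the 3.36 shown in the frontend
net = round(gross / (1 + vat), 2)
print(net)  # 3.36

# To make the frontend show 4.00 net, enter the gross equivalent in the backend
backend_value = round(4.00 * (1 + vat), 2)
print(backend_value)  # 4.76
```

This also agrees with the 16 / 3.36 ≈ 4.76 workaround above, which is the same ratio written as a proportion.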
I have sales, advertising spend and price data for 10 brands of same industry from 2013-2018. I want to develop an equation to predict 2019 sales.
The variables I have (price, plus ad spend by type) are: PricePerUnit, Magazine, News, Outdoor, Broadcasting, Print.
My confusion is that I am not sure whether to run the regression using only 2018 data, with 2018 sales as the target variable, adding an extra variable like Past_2Yeas_Sales (2016-17) to the price & ad spend variables above (for clarity, refer to the image of the data). With this type of data I would have a sample size of only 10, as there are only 10 brands, which I think is too low for linear regression to give correct results.
The second option (which would increase the sample size), I figure, is: instead of having a brand as an observation, take brand+year as an observation, which increases my sample size to 60. E.g. brand A has 6 observations (A-2013, A-2014, ..., A-2018), B has B-2013, B-2014, ..., B-2018, and so on for 10 brands (refer to the image for the data).
Is the second option a valid way to run the regression? What is the right way to run a regression in such small-sample situations?
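To make the second option concrete, here is a minimal sketch of pooling brand-year rows and fitting ordinary least squares by hand (normal equations, no external libraries). The data, variable names, and coefficients are entirely made up for illustration; in practice you would also want to worry about serial correlation within brands:

```python
import random

# Hypothetical pooled data: one row per brand-year (10 brands x 6 years = 60 rows);
# columns: price, ad spend, and the previous year's sales as a lagged predictor.
random.seed(0)
rows = []
for brand in range(10):
    past_sales = random.uniform(50, 150)
    for year in range(2013, 2019):
        price = random.uniform(5, 15)
        ad = random.uniform(10, 100)
        sales = 200 - 8 * price + 0.5 * ad + 0.3 * past_sales + random.gauss(0, 5)
        rows.append((price, ad, past_sales, sales))
        past_sales = sales

# Ordinary least squares via the normal equations (X'X) b = X'y,
# solved with Gauss-Jordan elimination.
X = [[1.0, p, a, s] for p, a, s, _ in rows]   # intercept + 3 predictors
y = [r[3] for r in rows]

k = len(X[0])
XtX = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(k)] for r in range(k)]
Xty = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(k)]

def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [M[r][c] - f * M[col][c] for c in range(n + 1)]
    return [M[i][n] / M[i][i] for i in range(n)]

coefs = solve(XtX, Xty)
print(coefs)  # approaches the true [200, -8, 0.5, 0.3] as noise shrinks
```

With 60 observations instead of 10, the estimates stabilize considerably; the fabricated coefficients are recovered with the right signs and roughly the right magnitudes.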
I've been smashing my head for hours trying to achieve an exponential increase. I tried with a second-degree equation but the result is not what I expected. Let me explain.
I have a pay-per-use service based on a credit balance. Users can add funds to their accounts at any time. I want to incentivize clients to deposit larger amounts, to avoid micro-payments and higher transaction fees. In short: the more you pay, the more bonus you get. For example, 100$ could reach an account in three ways:
Jack: 10 transactions 10$ each
Paul: 2 transactions 50$ each
Mark: 1 transaction of 100$
Paul should get a higher bonus than Jack but less than Mark. Now I'm trying with this function:
$rate = 0.005;
for ($amount = 10; $amount <= 5000; $amount += 10) {
    $bonus = exp($rate * $amount);
    echo "Deposit " . $amount . "$ and gain " . $bonus . " bonus<br>";
}
The minimum deposit is 10$ and the maximum deposit is 5000$, which is why I loop from 10 to 5000. The problem is simple:
Deposit 10$ and gain 1.05127109638 bonus
...
Deposit 100$ and gain 1.6487212707 bonus
...
Deposit 1000$ and gain 148.413159103 bonus
...
Deposit 2500$ and gain 268337.286521 bonus
...
Deposit 5000$ and gain 72004899337.4 bonus
You get too little for small amounts and way too much for big amounts. I've also tried different ranges, for example 10$ to 100$ with one rate, 200$ to 1000$ with another, and so on, but of course when you deposit an amount near those limits you get less bonus. It's not logical:
Deposit 1000$ and gain 54.5981500331 bonus
... here starts the next range ...
Deposit 1250$ and gain 42.5210820001 bonus
I've also tried this approach:
function foo($wanted = 1000, $rangeLow = 10, $rangeHigh = 5000) {
    $increment = ($rangeHigh - $rangeLow) / ($wanted + 1);
    $r = array();
    for ($i = $rangeLow + $increment; $i < $rangeHigh; $i += $increment) {
        $r[] = $i;
    }
    return $r;
}
I "spread" 1000 bonus points between 10$ and 5000$ but with this I get a linear increase while I need an exponential one. Probably the solution is mixing both approaches but I don't know how to.
I think you want the pow function instead of exp:
pow Returns base raised to the power of exp.
exp Returns e raised to the power of arg.
e.g.
$rate = 0.05; // adjust as needed; 0.08 makes 5000 -> about x2
for ($amount = 10; $amount <= 5000; $amount += 10) {
    $bonus = pow($amount, $rate);
    echo "Deposit " . $amount . "$ and gain " . $bonus . " bonus<br>\n";
}
Outputs:
Deposit 10$ and gain 1.122018454302 bonus
Deposit 20$ and gain 1.1615863496415 bonus
Deposit 30$ and gain 1.1853758165593 bonus
..
Deposit 1000$ and gain 1.4125375446228 bonus
..
Deposit 5000$ and gain 1.5309059120639 bonus
Don't you just need to keep your exp function but multiply it by a value less than 1? If I've got my maths right, that should flatten the exponential curve.
I finally managed to complete this task. It's not something you can achieve with PHP alone; you also need an equation, otherwise you have no control over the increases.
In my case I have the equation of a parabola through 3 given points, where I define the lowest and the highest value. I wanted to give a maximum bonus of 1000 and a minimum of 0.1 for deposits from 10$ to 5000$. I did it in Excel with the matrix product of an inverted matrix of values. Anyway, I'm not going to share the whole formula since it's too specific; I'd rather share the PHP side of the story.
I wanted to "convert" my Excel formula in PHP but it took me too much time to code matrix calculations then i decided look for an existing library. Pear has a Math_Matrix package. If you don't want to recompile your webserver to get Pear, you can probably use this PHP class. I tested matrix inversions and it works.
That said, I suggest not using any of the above solutions unless you work with matrix math frequently. It's more convenient and efficient to calculate all the necessary values in Excel, or with the "old" pen and paper, so that PHP is left with simple calculations like +, -, *, ^2 etc.
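For reference, fitting a parabola through three chosen points doesn't strictly need a matrix library: with only three unknowns, the 3x3 system can be solved directly with Cramer's rule. A sketch under the same endpoints as above (0.1 at 10$, 1000 at 5000$); the 200-at-2500$ midpoint is a hypothetical value that sets the curvature:

```python
def parabola_through(p1, p2, p3):
    """Return (a, b, c) of y = a*x^2 + b*x + c through three points,
    solved with Cramer's rule on the 3x3 Vandermonde system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det([[x1 * x1, x1, 1], [x2 * x2, x2, 1], [x3 * x3, x3, 1]])
    a = det([[y1, x1, 1], [y2, x2, 1], [y3, x3, 1]]) / d
    b = det([[x1 * x1, y1, 1], [x2 * x2, y2, 1], [x3 * x3, y3, 1]]) / d
    c = det([[x1 * x1, x1, y1], [x2 * x2, x2, y2], [x3 * x3, x3, y3]]) / d
    return a, b, c

# Hypothetical anchors: minimum, a midpoint that sets the curvature, maximum
a, b, c = parabola_through((10, 0.1), (2500, 200), (5000, 1000))
bonus = lambda x: a * x * x + b * x + c
print(round(bonus(10), 1), round(bonus(5000), 1))  # 0.1 1000.0
```

This reproduces the Excel matrix-inversion result for the three-point case, so all that would remain on the PHP side is the final a*x² + b*x + c evaluation.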
I'm researching the possibility of using bitwise comparison to assess which options have been selected out of a possible 100 options.
Now, as an integer, a selection of all options would require storing values up to 2^99 (about 6E29), way beyond the BIGINT limit of circa 9E18.
Just as with dir permissions (1 = read, 2 = write, 4 = execute), 1 + 2 + 4 = 7 = full access,
I would like to know which of the 100 options have been chosen by the same method.
Any advice/tips much appreciated.
NB storage will be mysql
-- EDIT --
The end goal here is to simplify the check of which currencies a user can be paid in,
assigning values to currency like so:
Currency OptVal
GBP 1
USD 2
EUR 4
AUD 8
CAD 16
ZAR 32
and so on (there are many, many currencies, and more will arise through cryptocurrencies, I'm sure).
It would then be convenient to check which currencies a user has using bitwise operators...
so if a user had a currency setting of 3, that's only GBP and USD;
5 = GBP & EUR;
63 = GBP, USD, EUR, AUD, CAD, ZAR;
and so on - hope this clarifies the goal.
The issue is doing this in its simplest form of storing one integer when you have > 100 currencies: you need a value of 2^(n-1) for each option n, and for large n this number is very large and not storable as an integer (the BIGINT max value is 18446744073709551615).
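To illustrate the mechanics, and exactly where 64-bit integers run out: in a language with arbitrary-precision integers such as Python, the flag scheme itself is straightforward; the limit is purely the storage side, since option n needs the value 2^(n-1). A sketch using the currency values from the table above:

```python
# Flag values as in the table above: bit n-1 represents option n
GBP, USD, EUR, AUD, CAD, ZAR = 1, 2, 4, 8, 16, 32

user = 63  # GBP | USD | EUR | AUD | CAD | ZAR, the "63" example
print(bool(user & EUR))   # True
print(bool(user & 64))    # False: no 7th currency is set

user2 = GBP | EUR         # the "5" example
print(bool(user2 & USD))  # False

# Why BIGINT breaks down: the 100th option alone needs 2^99,
# far beyond the unsigned 64-bit maximum of 18446744073709551615
print(2 ** 99 > 18446744073709551615)  # True
```

So the bitwise checks themselves stay trivial; the problem is purely that a single integer column cannot hold 100 bits.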
You want advice. Don't do it this way.
MySQL offers the boolean data type, which is convenient for flags. Each value does occupy one byte, so the storage will be larger than using bits.
MySQL also offers the bit() data type, where you can put together up to 64 bits. You can read about it here.
Using built-in data types is simply the right way to go. They protect you from changes of server, from upgrades of the OS, and from the possibility that your application and server have different endian-ness (if you don't know what this is, then you definitely should not be thinking about bit fiddling).
The good news is that there are data types for what you want to do.
I have to track the stock of individual parts and kits (assemblies) and can't find a satisfactory way of doing this.
A bogus and hyper-simplified sample database:
Table prod:
prodID 1
prodName Flux capacitor
prodCost 900
prodPrice 1350 (900*1.5)
prodStock 3
-
prodID 2
prodName Mr Fusion
prodCost 300
prodPrice 600 (300*2)
prodStock 2
-
prodID 3
prodName Time travel kit
prodCost 1200 (900+300)
prodPrice 1560 (1200*1.3)
prodStock 2
Table rels
relID 1
relSrc 1 (Flux capacitor)
relType 4 (is a subpart of)
relDst 3 (Time travel kit)
-
relID 2
relSrc 2 (Mr Fusion)
relType 4 (is a subpart of)
relDst 3 (Time travel kit)
prodPrice: it's calculated from the cost, but not linearly. In this example, for costs of 500 or less the markup is 200%; for costs of 500-1000 it's 150%; for costs of 1000+ it's 130%.
That's why the Time travel kit is much cheaper than its individual parts combined.
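The tiered markup above can be expressed as a simple bracket lookup; this sketch (plain Python for illustration) reproduces the three sample prices from the tables:

```python
def prod_price(cost):
    """Markup brackets from the example: 200% up to 500, 150% up to 1000, 130% above."""
    if cost <= 500:
        return cost * 2.0
    elif cost <= 1000:
        return cost * 1.5
    return cost * 1.3

print(prod_price(300))   # 600.0  (Mr Fusion)
print(prod_price(900))   # 1350.0 (Flux capacitor)
print(prod_price(1200))  # 1560.0 (Time travel kit)
```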
prodStock: here is my problem. I can sell kits or individual parts, so the stock of the kits is virtual.
The problem when I buy:
Some providers sell me the Time travel kit as a whole (with one barcode) and some sell me the individual parts (each with a different barcode).
So when I receive stock I don't know how to enter it.
The problem when I sell:
If I only sold kits, calculating the stock would be easy: "I have 3 Flux capacitors and 2 Mr Fusions, so I have 2 Time travel kits and a Flux capacitor".
But I can sell kits or individual parts, so I have to track the stock of the individual parts and of the possible kits at the same time (and adjust the sell price accordingly).
Probably this is really simple, but I can't see a simple solution.
Resuming: I have to find a way of tracking the stock, and the database/program has to do it (I can't ask the clerk to correct the stock).
I'm using PHP + MySQL, but this is more a logical problem than a programming one.
Update: sadly, Eagle's solution won't work:
the relationships can be, and are, recursive (one kit uses another kit);
there are kits that use more than one of the same part (2 Flux capacitors + 1 Mr Fusion);
I really need to store a value for the stock of the kit. The same database is used for the web page where users buy the parts, and I have to show the available stock (otherwise they won't even try to buy), and I can't afford to recalculate the stock on every user search on the web page.
But I liked the idea of a boolean marking the stock as virtual.
Okay, well first of all, since the prodStock for the Time travel kit is virtual, you cannot store it in the database; it will essentially be a calculated field. It would probably help if you had a boolean on the table saying whether the prodStock is calculated or not. I'll pretend you had this field in the table and call it isKit for now (where TRUE implies it's a kit and the prodStock should be calculated).
Now to calculate the amount of each item that is in stock:
select p.prodID, p.prodName, p.prodCost, p.prodPrice, p.prodStock
from prod p
where not p.isKit
union all
select p.prodID, p.prodName, p.prodCost, p.prodPrice, min(c.prodStock) as prodStock
from prod p
    inner join rels r on (p.prodID = r.relDst and r.relType = 4)
    inner join prod c on (r.relSrc = c.prodID and not c.isKit)
where p.isKit
group by p.prodID, p.prodName, p.prodCost, p.prodPrice
I used the alias c for the second prod to stand for 'component'. I explicitly wrote not c.isKit since this won't work recursively. union all is used rather than union for efficiency reasons, since both return the same results here.
Caveats:
This won't work recursively (e.g. if a kit requires components from another kit).
This only works on kits that require only one of a particular item (e.g. if a time travel kit were to require 2 flux capacitors and 1 Mr. Fusion, this wouldn't work).
I didn't test this so there may be minor syntax errors.
This only calculates the prodStock field; to do the other fields you would need similar logic.
If your query is much more complicated than what I assumed, I apologize, but I hope that this can help you find a solution that will work.
As for how to handle the data when you buy a kit: this assumes you store the prodStock only on the component parts. So, for example, if you purchase a time machine from a supplier, instead of increasing the prodStock on the time machine product, you would increase it on the Flux capacitor and the Mr Fusion.
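Both caveats from the update (recursive kits and per-component quantities) can be handled by generalizing the min() to a recursive minimum, dividing each component's stock by the quantity the kit needs. A sketch in Python with hypothetical data structures; note it has no cycle detection and does not account for components shared between a kit and its parent:

```python
# Stock of non-kit parts, keyed by a made-up product id
stock = {"flux_capacitor": 3, "mr_fusion": 2}

# kit -> list of (component id, quantity needed); components may themselves be kits
kits = {
    "time_travel_kit": [("flux_capacitor", 2), ("mr_fusion", 1)],
    "delorean": [("time_travel_kit", 1), ("mr_fusion", 1)],
}

def available(prod):
    """Buildable units: for a kit, the bottleneck component limits the count."""
    if prod not in kits:
        return stock.get(prod, 0)
    return min(available(comp) // qty for comp, qty in kits[prod])

print(available("time_travel_kit"))  # 1 (3 capacitors // 2 needed = 1; fusions allow 2)
print(available("delorean"))         # 1
```

In SQL this shape usually becomes a recursive CTE or an application-side loop; the division by quantity is what lifts the "only one of each item" restriction.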
This is for a new feature on http://cssfingerprint.com (see /about for general info).
The feature looks up the sites you've visited in a database of site demographics, and tries to guess what your demographic stats are based on that.
All my demographics are in 0..1 probability format, not ratios or absolute numbers or the like.
Essentially, you have a large number of data points that each tend you towards their own demographics. However, just taking the average is poor, because it means that by adding in a lot of generic data, the number goes down.
For example, suppose you've visited sites S0..S50. All except S0 are 48% female; S0 is 100% male. If I'm guessing your gender, I want to have a value close to 100%, not just the 49% that a straight average would give.
Also, consider that most demographics (i.e. everything other than gender) do not have their average at 50%. For example, the average probability of having kids aged 0-17 is ~37%. The more a given site's demographics differ from this average (e.g. maybe it's a site for parents, or for child-free people), the more it should count in my guess of your status.
What's the best way to calculate this?
For extra credit: what's the best way to calculate this, that is also cheap & easy to do in mysql?
ETA: I think that something approximating what I want is Φ(AVG(z-score ^ 2, sign preserved)). But I'm not sure if this is a good weighting function.
(Φ is the standard normal distribution function - http://en.wikipedia.org/wiki/Standard_normal_distribution#Definition)
A good framework for these kinds of calculations is Bayesian inference. You have a prior distribution of the demographics - e.g. 50% male, 37% childless, etc. Preferably you would have it multivariate (10% male, childless, 0-17, Caucasian, ...), but you can start one-at-a-time.
After this prior, each site visited contributes new information about the likelihood of a demographic category, and you get a posterior estimate which informs your final guess. Under some independence assumptions, the updating formula is as follows:
posterior odds = (prior odds) * (site likelihood ratio),
where odds = p/(1-p) and the likelihood ratio is a multiplier modifying the odds after visiting the site. There are various formulas for it, but in this case I would just use the above formula for the general population and the site's population to calculate it.
For example, for a site that has 35% of its visitors in the "under 20" age group, which represents 20% of the population, the site likelihood ratio would be
LR = (0.35/0.65) / (0.2/0.8) = 2.154
so visiting this site would raise the odds of being "under 20" 2.154-fold.
A site that is 100% male would have an infinite LR, but you would probably want to limit it somewhat by, say, using only 99.9% male. A site that is 50% male would have an LR of 1, so it would not contribute any information on gender distribution.
Suppose you start knowing nothing about a person - his or her odds of being "under 20" are 0.2/0.8 = 0.25. Suppose the first site has an LR=2.154 for this outcome - now the odds of being "under 20" becomes 0.25*(2.154) = 0.538 (corresponding to the probability of 35%). If the second site has the same LR, the posterior odds become 1.16, which is already 54%, etc. (probability = odds/(1+odds)). At the end you would pick the category with the highest posterior probability.
There are loads of caveats with these calculations - for example, the assumption of independence likely being wrong, but it can provide a good start.
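The updating described above is easy to sketch numerically; this reproduces the "under 20" walk-through with the two example sites (Python, for illustration only):

```python
def likelihood_ratio(site_p, base_p):
    """Odds multiplier for a site whose visitor share of the category is site_p,
    against a general-population share of base_p."""
    return (site_p / (1 - site_p)) / (base_p / (1 - base_p))

base_p = 0.20                 # 20% of the population is "under 20"
odds = base_p / (1 - base_p)  # prior odds: 0.25

for site_p in (0.35, 0.35):   # two sites, each with 35% under-20 visitors
    odds *= likelihood_ratio(site_p, base_p)
    print(round(odds, 3), round(odds / (1 + odds), 3))
```

The two printed posteriors match the text: 35% after the first site, about 54% after the second.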
The naive Bayesian formula for your case looks like this (using MySQL user variables, which are @-prefixed):
SELECT  probability
FROM    (
        SELECT  @apriori := CAST(@apriori * ratio / (@apriori * ratio + (1 - @apriori) * (1 - ratio)) AS DECIMAL(30, 30)) AS probability,
                @step := @step + 1 AS step
        FROM    (
                SELECT  @apriori := 0.5,
                        @step := 0
                ) vars,
                (
                SELECT  0.99 AS ratio
                UNION ALL SELECT 0.48
                UNION ALL SELECT 0.48
                UNION ALL SELECT 0.48
                UNION ALL SELECT 0.48
                UNION ALL SELECT 0.48
                UNION ALL SELECT 0.48
                UNION ALL SELECT 0.48
                ) q
        ) q2
ORDER BY step DESC
LIMIT 1
Quick 'n' dirty: get a male score by multiplying the male probabilities, and a female score by multiplying the female probabilities. Predict the larger. (Actually, don't multiply; sum the log of each probability instead.) I think this is a maximum likelihood estimator if you make the right (highly unrealistic) assumptions.
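As a sketch of that quick'n'dirty approach, using the S0..S50 example from the question (and capping the 100%-male site at 99.9% to keep the log finite, as the Bayesian answer suggests):

```python
import math

# Fifty sites at 52% male plus one site at (capped) 99.9% male
male_probs = [0.52] * 50 + [0.999]

# Sum of log-probabilities instead of a product, to avoid underflow
male_score = sum(math.log(p) for p in male_probs)
female_score = sum(math.log(1 - p) for p in male_probs)

print("male" if male_score > female_score else "female")  # male
```

The single near-certain site contributes a log term of about -6.9 to the female score, which is enough to outweigh fifty nearly uninformative sites, exactly the behavior the question asks for.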
The standard formula for calculating the weighted mean is given in this question and this question
I think you could look into these approaches and then work out how you calculate your weights.
In your gender example above you could adopt something along the lines of a set of weights {1, ..., 0, ..., 1}: a linear decrease from 1 to 0 as the male share goes from 0% to 50%, and then a corresponding increase back up to 1 at 100%. If you want the effect skewed in favour of the outlying values, you can easily come up with an exponential or trigonometric function that provides a different set of weights. If you want, a normal distribution curve will also do the trick.
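A sketch of such a weighting for the gender example: weight each site by how far its male share is from the uninformative 50% point, so the fifty generic sites barely count and the all-male site dominates (one of many reasonable weight functions, not the only choice):

```python
# Site male-probabilities: fifty generic sites plus one all-male site
sites = [0.52] * 50 + [1.0]

# Weight = distance from 50%, scaled to the range 0..1
weights = [abs(p - 0.5) * 2 for p in sites]

estimate = sum(w * p for w, p in zip(weights, sites)) / sum(weights)
print(round(estimate, 3))  # 0.68 -- pulled well above the straight average of ~0.53
```

Swapping the linear distance for a steeper function (e.g. squaring the weights) would push the estimate further toward the outlying site.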