How to implement a minimum swap size on Uniswap V2?

How would one implement a minimum swap size feature on a Uniswap V2 clone? Ideally the minimum would be denominated in dollars, as in: at least $1000 worth of the input asset must be exchanged for the output asset (or vice versa).

Related

Solidity/Ethereum. Cheaper alternative regarding gas

I am learning Solidity/Ethereum and I came across this situation:
I have a mapping(address => uint) that keeps track of how much every address has paid my contract, and at some point I have to compute what percentage of the total pool a given user has contributed (for example, if the total pool is 100 ether and a user has contributed 10 ether, they have contributed 10% of the total pool).
In order to do so, I need access to the total pool. My first instinct was to have a variable totalPool which keeps track of the total value, so that every time an address pays the contract I do totalPool += msg.value;. However, while learning about the EVM, I kept reading how expensive it is to operate on storage.
My question is: what is cheaper in terms of gas, updating the totalPool storage variable every time an address pays the contract, or computing the total pool each time I need to find out the contribution ratio?
From what I understand of your use case, your first instinct is probably the simplest and best solution, unless you have an easy way to compute the total pool. Keep in mind that in Solidity it is impossible to loop over the elements of a mapping to sum them up. So unless you can calculate the size of your pool from other variables that would be stored anyway, the totalPool variable is most likely the best way to keep track of the pool size.
I highly recommend that you test as many implementations as you can come up with. Both the ethers.js and web3.js libraries have functions that allow you to estimate how much gas a transaction will require.
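For what it's worth, here is a minimal sketch of that kind of comparison using the ethers.js v5 API; the deposit() function, the contract addresses, and the local node URL are placeholders for your own setup:

```typescript
import { ethers } from "ethers";

// Hypothetical setup: a local dev node and two deployed variants of the same
// contract, each exposing a payable deposit() function. Addresses and ABI are
// placeholders for your own candidate implementations.
const provider = new ethers.providers.JsonRpcProvider("http://localhost:8545");
const signer = provider.getSigner(0);
const abi = ["function deposit() payable"];

async function compareGas(addressA: string, addressB: string): Promise<void> {
  const variantA = new ethers.Contract(addressA, abi, signer);
  const variantB = new ethers.Contract(addressB, abi, signer);

  const value = ethers.utils.parseEther("1");
  // estimateGas simulates the transaction and returns the expected gas usage
  // without actually sending it.
  const gasA = await variantA.estimateGas.deposit({ value });
  const gasB = await variantB.estimateGas.deposit({ value });

  console.log(`variant A: ${gasA.toString()} gas, variant B: ${gasB.toString()} gas`);
}
```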

Does an increase in the number of mining pools decrease the block generation time?

Can anyone clarify whether (or not) an increase in the number of mining pools in the Ethereum network will decrease the average block generation time? (For example, if another pool like Ethermine joined the network today and started mining.) Since all the pools are competing with each other, I am getting confused.
No, block generation times are driven by the current difficulty of the algorithm used in the Proof of Work model. Only when a solution is found is the block accepted onto the chain, and the difficulty determines how long it will take to find that solution. This difficulty automatically adjusts to speed up or slow down block generation times.
From the mining section of the Ethereum wiki:
The proof of work algorithm used is called Ethash (a modified version of Dagger-Hashimoto) and involves finding a nonce input to the algorithm so that the result is below a certain threshold depending on the difficulty. The point in PoW algorithms is that there is no better strategy to find such a nonce than enumerating the possibilities, while verification of a solution is trivial and cheap. If outputs have a uniform distribution, then we can guarantee that on average the time needed to find a nonce depends on the difficulty threshold, making it possible to control the time of finding a new block just by manipulating the difficulty.
The difficulty dynamically adjusts so that on average one block is produced by the entire network every 12 seconds.
(Note that the current block generation time is closer to 15 seconds. You can find block generation times on Etherscan.)
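To make the quoted idea concrete, here is a toy sketch of the generic pattern (enumerate nonces until a hash falls below a difficulty threshold); this is not Ethash itself, and SHA-256 from Node's crypto module stands in for the real hash function:

```typescript
import { createHash } from "crypto";

// Toy proof-of-work loop: try nonces until the hash, read as a big integer,
// falls below the threshold. A lower threshold (higher difficulty) means more
// iterations on average; verifying a given nonce is a single hash.
function mine(blockData: string, threshold: bigint): number {
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256")
      .update(blockData + nonce.toString())
      .digest("hex");
    if (BigInt("0x" + digest) < threshold) {
      return nonce;
    }
  }
}

// With a 256-bit hash and a 2^240 threshold, roughly 65,000 attempts are
// needed on average, which is quick to run as a demonstration.
const nonce = mine("block payload", BigInt(2) ** BigInt(240));
console.log("found nonce:", nonce);
```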

Arrangement of logic gates on an IC?

I'm just curious how data is physically transferred through logic gates. For example, does the pixel on my monitor that is 684 pixels down and 327 pixels to the right have a specific set or path of transistors in the GPU that only care about populating that pixel with the correct color? Or is it more random?
A standard cell library (en.wikipedia.org/wiki/Standard_cell) is used when building a chip for a particular foundry, somewhat like an instruction set used when compiling: the machine code for ARM is different from x86, but the same code can be compiled for either (if you have a compiler for that language for each, of course). So there is a list of standard functions (AND, OR, etc., plus more complicated ones) that your Verilog/VHDL is compiled against, and a particular cell is hardwired. There is an intimate relationship between the cell library, the foundry, and the process used (28nm, 22nm, 14nm, etc.). Basically you construct the chip one thin layer at a time using a photolithography-like process, and the specific semiconductors and other factors for one piece of equipment may vary from another, so 28nm technology may differ from 14nm and you may need to construct an AND gate differently, hence a different cell library. That does not necessarily mean there is only one AND-gate cell for a particular process at a particular foundry; more than one may have been developed.
As far as how pixels and video work: there is a memory somewhere, generally on the video card itself. Depending on the screen size, number of colors, etc., that memory can be organized differently. There may also be multiple frames used to avoid flicker and provide a higher frame rate, so you might have one screen's image at address 0x000000 in this memory; the video card extracts the pixel data starting at this address while software is generating the next frame at, say, 0x100000.
Then, when it is time to switch frames based on the frame rate, the logic may switch to displaying the image at 0x100000 while software modifies 0x000000. So for a particular video mode, the first three bytes in the memory at some known offset could be the pixel data for the pixel at coordinate (0,0), the next three bytes for (1,0), and so on. For a width like 684 they could start the second line at offset 684*3, but they might instead start it at 0x400.
Whatever the case, for a particular mode the offset of a particular pixel within a frame of video memory stays the same so long as the mode settings don't change. The video card, following the rules of the interface used (VGA, HDMI, or interfaces specific to a phone LCD, for example), has logic that reads that memory and generates the right pulses or analog signal levels for each pixel.
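As a small illustration of that offset arithmetic, here is a sketch assuming a packed 24-bit RGB framebuffer with a fixed row stride; the numbers are illustrative and not tied to any real video mode:

```typescript
// Offset arithmetic for a packed 24-bit RGB framebuffer (3 bytes per pixel)
// with a fixed row stride. Rows start strideBytes apart; within a row,
// pixels are packed back to back.
const BYTES_PER_PIXEL = 3;

function pixelOffset(x: number, y: number, strideBytes: number): number {
  return y * strideBytes + x * BYTES_PER_PIXEL;
}

// A 1920-pixel-wide mode with no row padding:
const stride = 1920 * BYTES_PER_PIXEL;

// The pixel 684 rows down and 327 columns across always lives at the same
// offset for as long as the mode settings do not change.
console.log(pixelOffset(327, 684, stride));
```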

Google Compute Engine auto scaling based on queue length

We host our infrastructure on Google Compute Engine and are looking into Autoscaling for groups of instances. We do a lot of batch processing of binary data from a queue. In our case, this means:
When a worker is processing data, the CPU is always at 100%
When the queue is empty, we want to terminate all workers
Depending on the length of the queue, we want a certain number of workers
However, I'm finding it hard to figure out a way to autoscale this on Google Compute Engine, because the autoscaler appears to scale only on per-instance metrics such as CPU. From the documentation:
Not all custom metrics can be used by the autoscaler. To choose a valid custom metric, the metric must have all of the following properties:
The metric must be a per-instance metric.
The metric must be a valid utilization metric, which means that data from the metric can be used to proportionally scale up or down the number of virtual machines.
If I'm reading the documentation correctly, doesn't this make it hard to use autoscaling based on a global queue length?
Backup solutions
Write a simple auto-scale handler using the Google Cloud API to create or destroy new workers using the Instances API
Write a simple auto-scale handler using instance groups and then manually insert/remove instances using InstanceGroups: insert
Write a simple auto-scale handler using InstanceGroupManagers: resize (a minimal sketch of this option follows the list)
Create a custom per-instance metric which measures len(queue)/len(workers) on all workers
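For reference, a minimal sketch of the third backup option (your own handler that polls the queue and resizes a managed instance group); getQueueLength and resizeInstanceGroup are placeholder stubs standing in for your queue client and a call to the Compute Engine InstanceGroupManagers.resize method:

```typescript
// Sketch of a standalone autoscale handler: poll the queue, size the group
// proportionally to the backlog, and scale to zero when the queue is empty.
const ITEMS_PER_WORKER = 10;      // assumed backlog each worker should handle
const MAX_WORKERS = 20;
const POLL_INTERVAL_MS = 60_000;

async function getQueueLength(): Promise<number> {
  // Placeholder: query your queue system for the current backlog here.
  return 0;
}

async function resizeInstanceGroup(targetSize: number): Promise<void> {
  // Placeholder: call the Compute Engine InstanceGroupManagers.resize method
  // for your managed instance group here.
  console.log(`would resize instance group to ${targetSize} workers`);
}

async function autoscaleOnce(): Promise<void> {
  const queueLength = await getQueueLength();
  const target = Math.min(MAX_WORKERS, Math.ceil(queueLength / ITEMS_PER_WORKER));
  await resizeInstanceGroup(target);
}

setInterval(() => {
  autoscaleOnce().catch((err) => console.error("autoscale failed", err));
}, POLL_INTERVAL_MS);
```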
As of February 2018 (Beta), this is possible via "Per-group metrics" in Stackdriver.
Per-group metrics allow autoscaling with a standard or custom metric that does not export per-instance utilization data. Instead, the group scales based on a value that applies to the whole group and corresponds to how much work is available for the group or how busy the group is. The group scales based on the fluctuation of that group metric value and the configuration that you define.
More information at https://cloud.google.com/compute/docs/autoscaler/scaling-stackdriver-monitoring-metrics#per_group_metrics
The how-to is too long to post here.
As far as I understand, this is not implemented yet (as of January 2016). At the moment autoscaling is only targeted at web-serving scenarios, where you want to serve web pages or other web services from your machines and keep some reasonable headroom (e.g. in terms of CPU or other metrics) for spikes in traffic. The system will then adjust the number of instances/VMs to match your target.
You are looking for autoscaling for batch processing scenarios, and this is not catered for at the moment.

Use the measurement standards or give users the choice

I deal with measurements in my application (height, weight, etc.). All of the equations I've found use the international standard units (kg, cm). I can easily do the conversions in the code, but I'm wondering whether I should give users the option, or make them do the conversions themselves if they don't wish to use the standard units.
Some similar programs I've seen (from the U.S.) only allow feet and inches for height and pounds for weight.
Without more information (that is, whether your application's users prefer metric over U.S. units or vice versa, the geographic scope of those users, or whether one unit system is more "authoritative" than the other), it would be appropriate to give users the choice of measurement system.
If you want to support both international (kg and cm) and U.S. units (feet, inches, and pounds), you should try defining a base unit, which is the smallest unit representable in both unit systems, and perform operations on those base units rather than on metric or U.S. units. See this question: Metric and Imperial internal representation
This can be important if, for example, your application receives or measures heights and weights that could be in either metric or U.S. units.
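A minimal sketch of that base-unit idea, assuming millimeters for length and grams for mass as the internal units (one reasonable choice among several; the class names and the BMI example at the end are purely illustrative):

```typescript
// Store every length in millimeters and every mass in grams, converting only
// at the input/output boundary. Conversion factors are exact by definition.
const MM_PER_INCH = 25.4;
const MM_PER_CM = 10;
const GRAMS_PER_POUND = 453.59237;
const GRAMS_PER_KG = 1000;

class Height {
  private constructor(private readonly mm: number) {}
  static fromCentimeters(cm: number): Height { return new Height(cm * MM_PER_CM); }
  static fromFeetInches(feet: number, inches: number): Height {
    return new Height((feet * 12 + inches) * MM_PER_INCH);
  }
  toCentimeters(): number { return this.mm / MM_PER_CM; }
  toInches(): number { return this.mm / MM_PER_INCH; }
}

class Weight {
  private constructor(private readonly grams: number) {}
  static fromKilograms(kg: number): Weight { return new Weight(kg * GRAMS_PER_KG); }
  static fromPounds(lb: number): Weight { return new Weight(lb * GRAMS_PER_POUND); }
  toKilograms(): number { return this.grams / GRAMS_PER_KG; }
  toPounds(): number { return this.grams / GRAMS_PER_POUND; }
}

// Equations written for kg and cm can be fed from either input system,
// e.g. BMI = kg / m^2 from a height entered in feet/inches and a weight in pounds:
const h = Height.fromFeetInches(5, 9);
const w = Weight.fromPounds(160);
const bmi = w.toKilograms() / Math.pow(h.toCentimeters() / 100, 2);
console.log(bmi.toFixed(1));
```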