Where can I find information such as the latest block, hash difficulty, average block times, difficulty over time, mining pool sizes, etc.?
"status" can mean a few different things:
Real-time data: http://stats.ethdev.com
"historical" data (i.e. "x over time"): https://etherchain.org/statistics/basic
"historical" data alternative: http://etherscan.io/charts
Market cap: http://coinmarketcap.com/currencies/ethereum/
Price: https://poloniex.com/exchange#btc_eth
And, of course, you can get some data straight from the horse's mouth with any of the ethereum clients. Geth, for instance, will spit out information like so:
I1119 05:42:27.850525 3419 blockchain.go:1142] imported 1 block(s) (0 queued 0 ignored) including 2 txs in 10.879276ms. #564372 [cc3c199d / cc3c199d]
I1119 05:42:35.415481 3419 blockchain.go:1142] imported 1 block(s) (0 queued 0 ignored) including 2 txs in 8.48831ms. #564373 [fc24f250 / fc24f250]
This information includes the latest blocks, timestamps, # of txs and # of uncles.
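If you would rather query a client programmatically than scrape its log, the standard Ethereum JSON-RPC interface exposes the same data. A rough Python sketch, assuming a local node with its HTTP RPC interface enabled on the default port and the requests library installed:

import requests

RPC_URL = "http://localhost:8545"   # assumes geth was started with HTTP RPC enabled

def rpc(method, params=None):
    # Minimal JSON-RPC 2.0 call helper.
    payload = {"jsonrpc": "2.0", "method": method, "params": params or [], "id": 1}
    return requests.post(RPC_URL, json=payload).json()["result"]

latest = int(rpc("eth_blockNumber"), 16)
block = rpc("eth_getBlockByNumber", [hex(latest), False])

print("block number :", latest)
print("timestamp    :", int(block["timestamp"], 16))
print("difficulty   :", int(block["difficulty"], 16))
print("transactions :", len(block["transactions"]))
print("uncles       :", len(block["uncles"]))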
Storing an address consumes 20,000 gas via SSTORE.
Assume a gas price of 35 Gwei.
If I store 10,000 addresses in a mapping, it will cost me:
20,000 gas * 10,000 = 200,000,000 gas
200,000,000 gas * 35 Gwei = 7 Ether
Is the calculation correct?
If I do the same on a layer-2 chain, does the whole thing cost 7 MATIC, for example, or is there something else I need to consider?
Your calculation is correct.
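To sanity-check the arithmetic, a tiny Python snippet (just unit conversion, nothing chain-specific):

GWEI = 10**9            # wei per gwei
ETHER = 10**18          # wei per ether

sstore_gas = 20_000     # gas for one zero -> non-zero SSTORE
addresses = 10_000
gas_price = 35 * GWEI   # 35 gwei, expressed in wei

total_gas = sstore_gas * addresses        # 200,000,000 gas
cost_wei = total_gas * gas_price
print(total_gas, cost_wei / ETHER)        # 200000000 7.0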
I'm assuming you want to store the values in an array instead of 10k separate storage variables. If it's a dynamic-length array, you should also consider the cost of the SSTORE that updates the slot holding the array length (a non-zero to non-zero write, currently 2,900 gas for each .push() that resizes the array).
You should also consider the block gas limit: a transaction costing 200M gas won't fit into a block on probably any network, so no miner will include it.
So depending on your use case, you might want to change the approach. For example, if the addresses are used for validation, you might be able to store just the Merkle tree root (one value instead of 10,000) and then validate against it using an address and its Merkle proof.
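As an illustration of that Merkle-root idea, here is a rough Python sketch: build a root over the address list off-chain and verify a single address against it with a proof. Everything here is hypothetical; a real deployment would use keccak256 and perform the verification in the contract, while sha256 is used below only to keep the sketch standard-library-only.

import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash; an on-chain verifier would use keccak256 instead.
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    # Returns (root, proof) where proof lists the sibling hashes for leaves[index].
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])            # duplicate last node on odd-sized levels
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append(level[sibling])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root, leaf, proof, index):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

addresses = [f"0x{i:040x}".encode() for i in range(10_000)]   # fake addresses for the example
root, proof = merkle_root_and_proof(addresses, 1234)          # only `root` would live on-chain
print(verify(root, addresses[1234], proof, 1234))             # True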
I'm taking a look at the UniswapV2 tutorial walkthrough.
The following is in reference to this function in the GitHub repo; the tutorial states:
uint _kLast = kLast; // gas savings
The kLast state variable is located in storage, so it will keep its value between different calls to the contract. Access to storage is a lot more expensive than access to the volatile memory that is released when the function call to the contract ends, so we use an internal variable to save on gas.
So in traditional programming, _kLast would be a reference to kLast. _kLast is referenced 3 more times after its instantiation.
Had they used just kLast as the variable, and not assigned it to a local uint, would it cost a storage read each time kLast is used?
If this is NOT the case, then I really don't understand how they're saving on gas. Can someone explain?
Each storage read (opcode SLOAD) of the same slot costs 2,100 gas the first time during a transaction, and then 100 gas for each further read during the same transaction. (This is after EIP-2929, implemented in the Berlin hardfork. Before that, it was 800 gas for each read, no matter how many times you performed the reads.)
Each memory write (opcode MSTORE) and each memory read (opcode MLOAD) costs 3 gas.
So in traditional programming, _kLast would be a reference to kLast
In this particular Solidity snippet, _kLast is not a reference pointing to storage. It's a memory variable whose value is copied from the storage variable.
So 3 storage reads - not creating the memory variable - would cost 2,300 gas (= 2,100 + 100 + 100).
But because the code creates the memory variable, it performs one storage read (2,100 gas), one memory write (3 gas), and three memory reads (3 x 3 gas), totalling 2,112 gas, which is the cheaper option.
Some other EVM-compatible networks, such as BSC, might still use the original gas calculation of 800 per SLOAD, which would make the difference even larger: 2,400 gas non-optimized (3 x 800) vs. 812 gas optimized (800 + 3 + 3 x 3).
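To generalize the comparison, here is the same arithmetic as a small Python sketch (the constants are the EIP-2929 gas costs quoted above; swap in 800 for the pre-Berlin behaviour):

SLOAD_COLD = 2_100   # first read of a storage slot in a transaction (EIP-2929)
SLOAD_WARM = 100     # each further read of the same slot
MSTORE = MLOAD = 3   # memory write / read

def direct_reads(n):
    # Read the storage slot every time it is needed.
    return SLOAD_COLD + (n - 1) * SLOAD_WARM

def cached_reads(n):
    # Read storage once into a local variable, then read that instead.
    return SLOAD_COLD + MSTORE + n * MLOAD

print(direct_reads(3), cached_reads(3))   # 2300 2112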
In JMeter v2.13, is there a way to capture Throughput via non-GUI/command-line mode?
I have the jmeter.properties file configured to output via the Summariser and I'm also outputting another [more detailed] .csv results file.
call ..\..\binaries\apache-jmeter-2.13\bin\jmeter -n -t "API Performance.jmx" -l "performanceDetailedResults.csv"
The performanceDetailedResults.csv file provides:
timeStamp
elapsed time
responseCode
responseMessage
threadName
success
failureMessage
bytes sent
grpThreads
allThreads
Latency
However, no amount of tweaking the .properties file or the test itself seems to provide Throughput results like I get via the GUI's Summary Report's Save Table Data button.
All articles, postings, and blogs seem to indicate it isn't possible without manual manipulation in a spreadsheet. But I'm hoping someone out there has figured out a way to do this with no, or minimal, manual work, as the client doesn't want to calculate the Throughput value by hand each time.
Throughput is calculated by JMeter Listeners, so it isn't something you can enable via the properties file. The same applies to other calculated metrics, like:
Average response time
50, 90, 95, and 99 percentiles
Standard Deviation
Basically, throughput is calculated simply by dividing the total number of requests by the elapsed time.
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time)
Hopefully it won't be too hard for you.
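If you want to avoid the spreadsheet entirely, a small post-processing script can compute it from the detailed results file produced by the -l flag. A rough Python sketch, assuming the results file has a header row with timeStamp (epoch milliseconds) and elapsed (milliseconds) columns (enable jmeter.save.saveservice.print_field_names if it doesn't):

import csv

with open("performanceDetailedResults.csv", newline="") as f:
    rows = list(csv.DictReader(f))

starts = [int(r["timeStamp"]) for r in rows]                    # sample start times (epoch ms)
ends = [int(r["timeStamp"]) + int(r["elapsed"]) for r in rows]  # sample end times (epoch ms)

total_seconds = (max(ends) - min(starts)) / 1000.0
throughput = len(rows) / total_seconds

print(f"{len(rows)} samples, throughput = {throughput:.2f} requests/second")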
References:
Glossary #1
Glossary #2
Did you take a look at JMeter-Plugins?
This tool can generate an aggregate report from the command line.
I'm using dygraphs to display weather data collected every ten minutes. One of the datapoints is snow depth (in meters). Once in a while the depth is wrong: 0 meters, where the previous and next readings are 0.9 meters. It's winter at the moment, and I've been at the location to verify that 0.9 is correct.
With 47 datapoints at 0.9 m and one at 0 m, the standard deviation is approx. 0.13 (using ministat on FreeBSD).
I've looked through the dygraphs documentation but can't find a way to ignore values like 0 when the standard deviation is above a certain threshold.
This page on dygraphs has three examples of how to deal with standard deviation, but I just want to ignore the 0, not use it with the errorBars or customBars option, and the data is not in fractions.
The option rollPeriod is not applicable since it merely averages x data points.
I fetch the weather data in XML format from a third party every ten minutes in a cron job, parse the values, and store them in PostgreSQL. Then I select the last 48 data points in another cron job and redirect the data to a CSV file, which dygraphs consumes.
Can I have dygraphs ignore 0 if standard deviation is above a threshold?
I can get the standard deviation with ministat or another utility from the last cron job and remove the 0 from the CSV file using sed/awk, but I would prefer dygraphs to do that.
var g5 = new Dygraph(
  document.getElementById("snow_depth"),
  "snow.csv",
  {
    legend: 'always',
    title: 'Snødjupne',
    labels: ["", "djupne"],
    ylabel: 'Meter'
  }
);
The first three lines of the CSV file:
2016-02-23 01:50:00+00,0.91
2016-02-23 02:00:00+00,0
2016-02-23 02:10:00+00,0.9
You should clean your data before you chart it! dygraphs is a charting tool, not a data processing library.
The best approach is to load your data via AJAX, parse it, filter out the zeros and feed it into dygraphs using native format.
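If you prefer to keep the existing cron/CSV pipeline, the cleaning step can live there instead. A rough Python sketch, using the snow.csv layout from the question; the threshold and the output file name are illustrative, not part of the original setup:

import csv
import statistics

THRESHOLD = 0.1   # illustrative: only drop zeros when the spread suggests they are outliers

with open("snow.csv", newline="") as f:
    rows = list(csv.reader(f))            # e.g. ["2016-02-23 01:50:00+00", "0.91"]

values = [float(r[1]) for r in rows]

if statistics.stdev(values) > THRESHOLD:
    rows = [r for r in rows if float(r[1]) != 0.0]

with open("snow_filtered.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

Point the Dygraph constructor at the filtered file and the stray zeros never reach the chart.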
I am new to Cassandra and have just set up a cluster (version 1.2.8) with 5 nodes, and I have created several keyspaces and tables there. However, I found that all the data is stored on one node (in the output below, I have replaced IP addresses with node numbers manually):
Datacenter: 105
==========
Address   Rack   Status   State    Load        Owns      Token
                                                          4
node-1    155    Up       Normal   249.89 KB   100.00%   0
node-2    155    Up       Normal   265.39 KB   0.00%     1
node-3    155    Up       Normal   262.31 KB   0.00%     2
node-4    155    Up       Normal   98.35 KB    0.00%     3
node-5    155    Up       Normal   113.58 KB   0.00%     4
In their cassandra.yaml files, I use all default settings except cluster_name, initial_token, endpoint_snitch, listen_address, rpc_address, seeds, and internode_compression. Below I list the non-IP-address fields I modified:
endpoint_snitch: RackInferringSnitch
rpc_address: 0.0.0.0
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "node-1, node-2"
internode_compression: none
All nodes use the same seeds.
Could someone tell me where I might have gone wrong in the config? Please feel free to let me know if any additional information is needed to figure out the problem.
Thank you!
If you are starting with Cassandra 1.2.8, you should try using the vnodes feature. Instead of setting initial_token, uncomment # num_tokens: 256 in cassandra.yaml, and leave initial_token blank or comment it out. Then you don't have to calculate token positions. Each node will randomly assign itself 256 tokens, and your cluster will be mostly balanced (within a few %). Using vnodes also means you don't have to "rebalance" your cluster every time you add or remove nodes.
See this blog post for a full description of vnodes and how they work:
http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2
Your token assignment is the problem here. The assigned token determines the node's position in the ring and the range of data it stores. When you generate tokens, the aim is to use up the entire range from 0 to (2^127 - 1). Tokens aren't IDs as in a MySQL cluster, where you have to increment them sequentially.
There is a tool on git that can help you calculate the tokens based on the size of your cluster.
Read this article to gain a deeper understanding of tokens, and if you want to understand the meaning of the numbers that are generated, check this article out.
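If you stay with manual token assignment instead of vnodes, the goal is simply to space the initial_token values evenly across the RandomPartitioner range of 0 to 2^127 - 1 mentioned above. A quick Python sketch for a 5-node cluster like yours (standard even spacing, shown for illustration):

def initial_tokens(node_count):
    # Evenly spaced tokens over the RandomPartitioner range [0, 2**127).
    return [i * 2**127 // node_count for i in range(node_count)]

for node, token in enumerate(initial_tokens(5), start=1):
    print(f"node-{node}: initial_token = {token}")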
You should provide a replication_factor when creating a keyspace:
CREATE KEYSPACE demodb
WITH REPLICATION = {'class' : 'SimpleStrategy', 'replication_factor': 3};
If you run DESCRIBE KEYSPACE x in cqlsh, you'll see which replication_factor is currently set for your keyspace (I assume the answer is 1).
More details here