When will Ethereum switch to Proof of Stake? [closed]

Ethereum currently uses GPU-mined proof-of-work to extend the blockchain, but I've read that the Ethereum Foundation and developer team aim to move to proof-of-stake at some point in the future. What is the difference between the two, and when will the switch be made?

Right now, Ethereum uses “proof of work” mining. This means miners use their graphics cards to essentially guess random numbers until somebody guesses the right number. Each guess is based on the past Ethereum transaction ledger and therefore represents a “vote” for what a miner believes is the “correct” chain. It is in miners’ economic self-interest to make guesses based on the “correct” chain, because they won’t get rewarded (or rewarded as much) for guessing on the wrong chain. This is what keeps the ledger consensus intact.
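To make the “guessing” concrete, here is a minimal toy sketch of a proof-of-work loop in Python. It uses plain SHA-256 and a made-up difficulty value purely for illustration; real Ethereum mining uses the Ethash algorithm and real block headers, so treat this as a sketch of the idea, not the actual protocol:

import hashlib
import os

def mine(block_header: bytes, difficulty: int) -> int:
    """Toy proof-of-work: guess nonces until the hash falls below a target."""
    target = 2**256 // difficulty  # higher difficulty -> smaller target
    nonce = 0
    while True:
        guess = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(guess, "big") < target:
            return nonce  # the "winning guess" that earns the block reward
        nonce += 1

header = os.urandom(32)  # stand-in for a real block header
print(mine(header, difficulty=100_000))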
The downside to PoW is the ridiculous amount of energy it takes to keep all these graphics cards running 24/7. Proof of Stake (PoS) is a different type of mining based on ether holdings. Rather than graphics-card hashpower representing a miner’s right to make guesses for the next block, their ether holdings do. No more graphics cards necessary.
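As a rough sketch of what “holdings instead of hashpower” means, here is a toy stake-weighted selection, with made-up validator names and balances. Real PoS designs (including Casper) add verifiable randomness, security deposits, and slashing penalties on top of this basic idea:

import random

stakes = {"alice": 500, "bob": 300, "carol": 200}  # hypothetical ether balances

def pick_proposer(stakes):
    """Pick the next block proposer with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return random.choices(validators, weights=weights, k=1)[0]

print(pick_proposer(stakes))  # alice wins ~50% of the time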
A mechanism (the so-called “difficulty bomb”) is built into the Ethereum protocol to make PoW (GPU) mining insanely difficult sometime in mid-2016, which will force miners to switch to Proof of Stake if they hope to stay competitive.
The main developer for the proposed proof-of-stake algorithm (CASPER) is Vlad Zamfir. Radio interview: https://www.reddit.com/r/ethereum/comments/3t2cph/vlad_zamfir_bringing_ethereum_towards/
Slide deck from DEVCON day 1: https://docs.google.com/presentation/d/1bV_vXJBko-DmhAgnOFYg8ZNbAvCZCZrlf0KBFPqwVIw
(The DEVCON day 1 video was removed by YouTube for some reason.)
EDIT: Ethereum entered the Homestead phase on 3/14/2016, and there's still Metropolis to go before Serenity, which is supposed to be the PoS "final" phase, so PoS in mid-2016 seems unrealistic. Here's the phases announcement from a year ago; my TL;DR summary is in the comments.
https://www.reddit.com/r/ethereum/comments/2xsin2/the_ethereum_launch_process_vinay_gupta/

According to Vitalik Buterin on r/ethereum, the difficulty bomb, which will make PoW mining impossible at some point, was slowed down a bit by the Homestead hard fork.
As it turns out, with the change in the difficulty adjustment algorithm brought about in the last hardfork, the ice age will come very slowly indeed. Originally, the maximum amount by which the difficulty could adjust was 1/2048x, and so given a natural mining difficulty of ~2**45 (where it is now), after around block 3500000, it would go up faster than it goes down, and the protocol would quickly freeze. Now, difficulty can adjust down faster than that if the block time is slow enough, and so even after this point there is an equilibrium. At block 3.5m (1 year from now), we would have an equilibrium block time of 25s for 100k blocks (~1 month); then we would see 35s for 100k more blocks (now ~1.4 months); then ~55s for ~2.2 months, then ~95s for ~3.8 months, and so forth until we get ~655s for ~26 months (ie. slightly worse than bitcoin), and only after that does the protocol break because of the cap of ~99/2048 downward adjustment, and that final doom does not take place until 2021 (though it certainly gets very annoying by the second half of 2017).
TL;DR: Block times will get annoying in the second half of 2017, and the final doom happens somewhere in 2021. I expect the switch to proof of stake not before Q3/2017.
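To get a feel for the numbers in the quote above, here is a small sketch of the exponential “ice age” term from the Homestead-era difficulty formula; the full adjustment also involves the parent difficulty and block timestamps, which are omitted here:

def bomb_component(block_number):
    """Exponential ice-age term added to the mining difficulty."""
    return 2 ** (block_number // 100_000 - 2)

# Doubles every 100k blocks, so it eventually swamps the ~2**45 natural difficulty.
for n in (3_000_000, 3_500_000, 4_000_000, 4_500_000):
    print(n, bomb_component(n))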

Related

Software Security Protection with Hardware Dongle [closed]

I have read all the existing discussions on piracy and hardware support, so this is not the same old question. I have a new twist on this old discussion. You can now purchase USB dongles that allow you to put some of your important code into the dongle. If you have a complex algorithm and you put it into the dongle, someone would have to reverse engineer the contents of the dongle. If they tried to spoof the dongle, as was possible in the past, this would not work. All they can see is that data goes into a "black box" and result data comes out. It is no longer a matter of finding a jump true/false to bypass a license check in the source code.
Perhaps a mathematician with a lot of idle time on his hands could eventually reverse it, but that is an extreme level of interest! The other option is that the hardware dongle itself would need to be hacked. There are many protections against this built in, but this is probably the most effective approach.
So I want to take a scenario and see if I've missed something. I put the important part of my algorithm into the dongle to protect it. 6 doubles and 1 int go into the dongle; 1 double and 1 int are returned. This happens for thousands of data points. This is one of several functions of similar complexity. A hacker can see the rest of my assembly code (which I obfuscate as much as possible), but let's assume it is easily hacked. My question is: how hard is it to break into the dongle to access my assembly code in this proprietary hardware? Let's take as an example this company's product: http://www.senselock.com
I am not interested in lectures on how I'm inconveniencing customers and should open source my product, please. I am looking for a technical discussion on how a software/hardware engineer might approach extracting my assembly object from such a device. I am not asking in order to hack one, but to know how much of a deterrent this is against tampering. I know that if there is a will, there is always a way. But at first glance it looks like it would take several thousand dollars' worth of effort to bypass this scheme?
Given the response so far, I am adding some more specifics. The dongle has the following property: "Access to the chip is protected by PIN, and the maximum re-tries is pre-set by software developers. For instance, under a dictionary attack, once the number of re-tries exceed the pre-set value, the chip will trigger a self-locking mechanism". So to access the chip, and thus the code inside it, you have to know the PIN; otherwise, after let's say 10 tries, you will be locked out. I personally can't see any way anyone could compromise this system. It doesn't matter what goes in or out; what matters is what runs inside the dongle's ARM processor. Physical forced access would destroy the chip. Electrical access would require the PIN, or the chip locks up. How else could it be compromised?
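To make the mechanism concrete, here is a toy model of the retry counter as I understand the vendor's description; the class name, retry limit, and exact behavior are my own illustration, not the vendor's actual firmware:

class DongleLock:
    """Toy model of a PIN-protected chip with a self-locking retry counter."""
    def __init__(self, pin, max_retries=10):
        self._pin = pin
        self._max_retries = max_retries
        self._retries_left = max_retries
        self.locked = False

    def try_pin(self, attempt):
        if self.locked:
            raise RuntimeError("chip has self-locked; no further access")
        if attempt == self._pin:
            self._retries_left = self._max_retries  # reset on success
            return True
        self._retries_left -= 1
        if self._retries_left <= 0:
            self.locked = True  # the self-locking mechanism triggers here
        return False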
I pretty much agree with your point of view that all dongles can be hacked; it is just a matter of time and cost. If your encryption scheme is well designed, the EAL 5+ chip should be secure enough to protect your software from malicious attacks.
And I think that if you can READ the dongle, it probably means you have already hacked it, or it proves there is a fatal vulnerability in the encryption scheme.
BTW, the link you gave above does not work. Are you referring to this dongle? http://www.senselock.com/en/productinfor.php?nid=180&id=142&pid=
There are companies (such as break-ic.com) that maintain lists of MCUs they are able to break.
After breaking one, they give you only hex files.
In this case (MCU), every manufacturer has its own disassembler because of the hardware architecture of each MCU core, and there is no guarantee that the disassembler you need even exists!
So you should look for dongles whose MCU is unbreakable, or whose MCU has no disassembler.
Or you can build your own dongle!

Empirically estimating a project duration [closed]

What are common empirical formulas that can produce a rough estimate of project duration for the waterfall methodology (up to 20% fluctuation is acceptable)? If it helps narrow down the answer, you can assume the following is more or less known:
Number of devs is known and fixed, most devs are above average in terms of know-how, however some learning about domain-specific issues might be required.
Known and fixed max. number of app users.
Technology stack to be used is reasonably diverse (up to 4 different languages and up to 6 various platforms).
Interfacing to up to three legacy systems is expected.
Please feel free to provide estimate methods which cover a broader scope than the above points, they are just provided for basic guidance.
Do yourself a favor and pick up Steve McConnell's Software Estimation: Demystifying the Black Art. If you have access to past estimates and actuals, this can greatly aid in producing a useful estimate. Otherwise, I recommend this book and identifying the strategy from it most applicable to your situation.
Only expect to utilize 70% of your developers' time. The other 30% will be spent in meetings, answering email, taking the elevator, etc. For example, if they work 8 hours a day, they will only be able to code for 5.6 to 6.5 hours a day. Reduce this number if they work in a noisy environment where people are using the telephone.
Add 20% to any estimate a developer gives the project manager.
Lines of code is useless as a metric in estimating a project.
Success or failure depends on concise requirements from the customer. If the requirements aren't complete, count on the customer not being happy with the finished product.
Count on the fact that not all of the requirements will be dictated by the customer. There will be revisions to the requirements throughout the project.
Step 1. Create a schedule that is as granulated as is reasonably possible.
Step 2. Ask the people involved how long their features will take.
Step 3. Create an Excel spreadsheet which maps predictions to actual times.
Step 4. Repeat steps 1-3 for all new projects. Make use of an aggregated mapping from previous instances of step 3 to translate developer estimates into actual estimates (a minimal sketch of this calibration follows below).
Note that there are tools which can do this for you.
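For instance, here is a minimal sketch of the estimate-to-actual calibration in step 4, with made-up history numbers; a full tool such as Evidence-Based Scheduling runs Monte Carlo simulations over the whole distribution rather than using a single median factor:

from statistics import median

# Past features: (developer estimate in days, actual days taken) -- hypothetical data.
history = [(5, 9), (3, 4), (10, 18), (2, 3), (8, 13)]

# How much longer work actually takes, per estimated day.
factor = median(actual / estimate for estimate, actual in history)

def calibrated(estimate_days):
    """Translate a raw developer estimate into a calibrated one."""
    return estimate_days * factor

print(calibrated(6))  # a "6 day" feature, adjusted by past performance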
See also
Evidence-based-scheduling.
This project is not going to be cheap...
Number of devs is known and fixed, most devs are above average in terms of know-how, however some learning about domain-specific issues might be required.
This is a good thing. You don't want to flood the project with developers. Though if you go above around 10 people, count every 2 as only 1, as the rest is lost to overhead. Unless you can split the task into something that can be handled by two totally separate teams; then you could have a chance of getting some traction.
Known and fixed max. number of app users.
This means that you can with more certainty land your architecture early on, as you can estimate how much effort you must put into scaling your solution. This is a good thing. Make sure that you work within these limits and never ever fool yourself into thinking "it's fast enough". It almost never is if you doubt that it could be too slow...
Technology stack to be used is reasonably diverse (up to 4 different languages and up to 6 various platforms).
This isn't as important as whether your people already know this stack/set of languages. If any learning is involved, raise the estimate by 2x or 3x unless you perform a proof of concept up front to learn the technology. Or, even better, take the pain and get some training. If the language or technology to be used is unknown, it is quite likely that you will misuse it and do things that will screw stuff up.
Make sure that the technology is proven or you'll end up getting bitten by it.
Is the source available for the tools/technology?
Do you get support?
Do you understand the product and/or have you used it before?
Has the customer used it before?
If too many of these questions get a no, add some (or a lot of) additional time to the sum.
Interfacing to up to three legacy systems is expected.
This is really a kicker. For legacy integration ask yourself:
Has anyone else integrated with them?
Do you have access to people with knowledge of these systems?
Do they intend to share this knowledge with you?
Do you have to wait for changes to be made in these systems?
Are there test systems available for you to use?
Are there development systems available for you to use?
Again, if too many of these questions have a "no" on them, then be afraid. You should also know that actual integration takes about 3-5 times longer than you think.
This isn't a project that I would have given an off-the-cuff estimate for. Do yourself and your customer a favor and do this by the hour. If not, as time goes by, you will start cutting corners to cover up your lack of progress/underestimation, and both you and your customer will suffer.
There are many cost estimation software tools that can greatly ease the pain of cost estimation; we use ProjectCodeMeter. I know these tools are not perfect, but they do save time getting started by pointing you in the right direction.
Try this list of estimation tools on Wikipedia.

What is the Software Development Lifecycle? [closed]

Our investor wants an SDLC. I've never written one before, and I don't have enough time to go and buy a book or spend much time learning about them. From what I've been told, they consist of requirements (what needs to be done) and a list of what has been done. Is this correct?
Update:
I have found this article which really helps to explain things in simple terms and very quickly. Not that I think an SDLC should be done quickly. In my case, I have no other option.
There are lots of ideas about SDLC out there. You can't swing a cat without hitting one.
What have you done to develop software that attracted your investor in the first place? Can't you describe that? Why do you have to go out and "learn one"?
There's a number of choices:
Waterfall: requirements -> design -> build -> test -> deploy, all in sequence.
Iterative: similar to waterfall, but you break the design into smaller pieces, of 1-2 week duration, that are delivered at the end of the iteration.
Extreme Programming (XP): Kent Beck's approach; no BDUF (Big Design Up Front). Everything is designed, built, and delivered in small pieces.
Scrum: Agile, iterative, but not as dogmatic as XP.
Rational Unified Process (RUP): an iterative process from Rational, now part of IBM.
Not really; that's more project management. That's what you need at the point where you've figured out how you're going to develop software.
For the 'how' of developing Software, the two 'biggies' are Agile and Waterfall; with a weird hybrid in between the two.
But that's only one part of the Software Development Life Cycle: You still have to have a maintenance and deployment plan.
My question for you: If someone's giving you money, and they want a plan, isn't it in your best interest to read a book about the SDLC and give them a plan?
If your investor wants you to describe the SDLC, he wants you to describe what the life of a software project done by you looks like, from its plan and birth, through growth, to maturity and death. That is why it has "life" in its name. The result of the SDLC should be "software", hence the first word. The "development" part comes from the fact that you are responsible for planning, specifying, designing, and implementing the software; you create (develop) it. And finally, "cycle" means that when the investor looks at your SDLC and thinks it is good (it produces quality and business value), he can ask you to use the same process again in another project.
A complete SDLC means you need to perform: requirements gathering and analysis -> design (design document creation) -> coding and unit testing -> testing (system and integration testing) -> deployment and support -> maintenance.
I found this blog really helpful.

How do you manage non-user facing work in a strict scrum shop? [closed]

We're a medium-sized engineering shop (10-20). We are great at prioritizing and structuring work on our user-facing stories and making customers happy. But the cobbler's children have no shoes. If it isn't about customers, we have 0 process.
I'm looking for systems to ensure we correctly prioritize and accomplish the non user facing work to keep a dev shop running: QA environments (pretty heavy, in our case), continuous integration systems, the packaging, and so forth.
Now, resources are always limited. We don't want to give the cobbler's children 10 pairs of the fanciest shoes, and specialized bike shoes to boot. We want to do the right, necessary work, with the same scrummy discipline that is applied to the rest of our development.
Tell me what system works for you: how do you prioritize and organize non-user-facing work? I want systems that are simple and integrate smoothly with scrum.
(I'm aware of a red box at the top of this text, indicating that Stack Overflow's automated question parser thinks this is a subjective question that can't be answered. I think there are likely 2 or 3 excellent answers that can be or have been proven viable, and process is integral to programming. So here is some pseudocode representing our process. Fix this algorithm.)
IBacklog GetBacklogForWork(IWork requestedWork)
{
    if (requestedWork.IsUserFacing) return new PrioritizedBacklogRepository();
    // Everything else. Priority largely based on spare time and who thinks it's a neat idea.
    return new RandomizedPriorityRepository();
}

void HandleIncomingSuggestionsForWork(IEnumerable<IWork> ideas)
{
    foreach (var work in ideas) GetBacklogForWork(work).Insert(work);
}
Someone involved is using and depending on the results of the project. This is necessarily true; if it weren't true, why would you be doing it?
When you identify the person who most depends upon, or cares most about, the results of the project, you have the "user" that your project is facing. Make that person the customer.
IMO something like "QA environments" is, in a sense, user-facing work.
Quality is admittedly a "non-functional" requirement (so there isn't necessarily an associated "story"). But, you may have a non-functional requirement like "the software must be tested before it's shipped". You can assign a relative priority to such a requirement ("how important is it that software be tested?"), and then execute as usual (decide how to implement that requirement, estimate how long it will take to implement, schedule the implementation, assign the implementation, etc.).
What we do where I work is to reserve a percentage, right now around 15% give or take a few percent, for internal tasks that are non-user-facing work. This way the technical debt is handled, and if the task backlog becomes rather large, a sprint may be spent on it instead of new functionality. The way that last one gets pitched to the user or customer is that there will be a period where just maintenance and preventive work is done, so no new functions will be coming after the next sprint.
That's one idea that can be tweaked a bit, though it isn't necessarily fully fleshed out yet.
The way I've seen it work more or less OK is to do as much as possible of the non-functional/non-user-facing work as PART of a related user-facing activity, or of the first user-facing activity that requires it.
This is the easiest to cope with, as it just reflects the needs of the development organization in order to maintain sustainable velocity moving forward.
Additional work which cannot be related will be done using a percentage as described by JB King.
The alternative of pitching it to the PO as an investment with such-and-such ROI is a theoretically sexy concept, but with real-life POs I've seen, it rarely works.
It's very hard to get POs to understand the investment, not to mention actually being strong enough to delay functionality for it.
It's sometimes the difficult role of the development team to be the guys who "slow things down" in order to keep a sustainable situation.
Dev managers sometimes feel really bad about this whole situation, regardless of the chosen approach. My recommendation, both as someone who's been in that spot and as an Agile coach, is that as long as you feel you are doing the right thing for the business, focusing on non-functional work that is required NOW and has a relatively quick ROI, you should feel OK about this.
Cautionary note: this is an area where self-organization is really put to the challenge. The organization needs to trust the team to do the right thing, and the team needs to earn and not abuse that trust. It's a sign of maturity for an individual or a team to know the right balance.

Automatically tracking development time [closed]

I'm working on a personal project and I'd love to be able to say at the end: "I've spent X hours on this project". Now, one way to solve this is to use a manual time tracker (worked from: to:). I've run into problems with this, because I only manage to use it consistently for the first week or two. So I'd like to track development time automatically.
One idea I had was to insert a short script into the build process that would insert a timestamp into a log file every time a build is run. Later, I could analyze the intervals between builds and hopefully calculate a somewhat accurate picture of what's going on.
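Something like the following sketch is what I have in mind; the log file name is arbitrary, and the build system would just need to invoke this script on every build:

#!/usr/bin/env python3
# Append the current Unix timestamp to a log file on every build.
import time
from pathlib import Path

LOG = Path.home() / ".build_times.log"  # arbitrary location

with LOG.open("a") as f:
    f.write(f"{time.time():.0f}\n")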
Does anyone else have an idea of how such a time tracking tool could be implemented?
Quick follow-up based on the answers already provided:
Stop/start trackers aren't bad, but they require a lot of discipline, something that I perhaps should be working on. But they don't work for me.
Specific app-tracking programs are great, but I'm currently on Mac OS X.
My opinion is that you would greatly benefit from keeping a lightweight development journal: notes, sketches, designs, times, dates, etc. It's not an answer to your question, but it is a discipline that few developers have and one that they desperately need.
Life is busy and people must learn to track / budget their time and discipline themselves to take on good behaviors and habits.
I encourage you to fight and win this battle. Don't hand something this easy over to automation when there are greater gains to be had by improving your skills. You might also want to check out LifeHacker for some ideas.
A bit of a non-answer, but I hope you find it helpful.
If you use source control, you can use svn (or any other) hooks on commit and checkout that log timestamps to a DB, etc., when you check your project out and when you check it back in.
The trick to making this work - and it is easiest on single developer projects - is to MAKE SURE you check your work in when you are done working for a period of time, and that you check it out immediately prior to doing actual work.
This might not be feasible for your project. Build process checking etc suffers from the same issues - namely that you might work for 3 hours and then build 8 hours after that.
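As an illustration of the hook idea (assuming Subversion, which invokes hooks/post-commit with the repository path and revision as its arguments; the log path here is made up), a minimal hook could be:

#!/usr/bin/env python3
# hooks/post-commit: Subversion passes the repository path and revision number.
import sys
import time

repo, rev = sys.argv[1], sys.argv[2]
with open("/var/log/commit_times.log", "a") as f:
    f.write(f"{time.time():.0f}\t{repo}\t{rev}\n")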
We wrote a plug-in for our IDE (IntelliJ in our case) that keeps track of time spent per project automatically. The IDE's API lets you listen for events like edits, window changes, etc., so we log a record every time something like that happens. The reporting module looks at this raw data and determines the total time spent per project by comparing timestamps between records. If the difference is greater than 5 minutes, it assumes no work was done during this time.
It's not perfect and it's not 100% accurate, but you do eliminate all the overhead of manually tracking this stuff yourself through some external tool.
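The core of that reporting logic can be sketched in a few lines (the 5-minute threshold matches the heuristic described above; the sample timestamps are made up), and it works just as well on timestamps logged by build or commit hooks:

def total_active_seconds(timestamps, gap=300.0):
    """Sum the time between consecutive events, skipping idle gaps over `gap` seconds."""
    events = sorted(timestamps)
    return sum(
        later - earlier
        for earlier, later in zip(events, events[1:])
        if later - earlier <= gap
    )

stamps = [0, 60, 130, 200, 4000, 4050]  # hypothetical event times (seconds)
print(total_active_seconds(stamps) / 3600, "hours")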