Software Security Protection with Hardware Dongle [closed] - reverse-engineering

I have read all the existing discussions on piracy and hardware support, so this is not the same old question. I have a new twist on this old discussion. You can now purchase USB dongles that let you put some of your important code inside the dongle itself. If you put a complex algorithm into the dongle, someone would have to reverse engineer the dongle's contents. Spoofing the dongle, as was possible in the past, would not work: all an attacker can see is data going into a "black box" and result data coming out. It is no longer a matter of finding a true/false jump to bypass a license check in the code.
Perhaps a mathematician with a lot of idle time on his hands could eventually reverse it, but that is an extreme level of interest! The other option is that the hardware dongle itself would need to be hacked. There are many protections against this built in, but this is probably the most effective approach.
So I want to take a scenario and see if I've missed something. I put the important part of my algorithm into the dongle to protect it. 6 doubles and 1 int go into the dongle; 1 double and 1 int are returned. This happens for thousands of data points, and this is one of several functions of similar complexity. A hacker can see the rest of my assembly code (which I obfuscate as much as possible), but let's assume it is easily hacked. My question is: how hard is it to break into the dongle to access my assembly code inside this proprietary hardware? Let's take this company's product as an example: http://www.senselock.com
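To make the data flow concrete, here is a minimal host-side sketch of the call pattern described above. It is an illustration only, not the Senselock SDK: the function send_to_dongle() is a hypothetical stand-in for whatever vendor call actually moves bytes to and from the token.

```python
import struct

def send_to_dongle(payload: bytes) -> bytes:
    """Placeholder for the vendor transport call (USB under the hood)."""
    raise NotImplementedError("replace with the real SDK call")

def run_protected_step(values: list[float], flag: int) -> tuple[float, int]:
    # Pack the 6 doubles and 1 int described in the question (little-endian).
    request = struct.pack("<6di", *values, flag)
    response = send_to_dongle(request)
    # The dongle returns 1 double and 1 int; the algorithm that produced them
    # never leaves the token, so an attacker only ever sees this I/O.
    result, status = struct.unpack("<di", response)
    return result, status
```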
I am not interested in lectures on how I'm inconveniencing customers and should open source my product, please. I am looking for a technical discussion of how a software/hardware engineer might approach extracting my assembly object from such a device. I am not asking in order to hack one, but to know how much hassle an attacker faces, as my discouragement against tampering. I know that where there is a will, there is always a way. But at first glance it looks like it would take several thousand dollars' worth of effort to bypass this scheme?
Given the response so far, I am adding some more specifics. The dongle has the following property, "Access to the chip is protected by PIN, and the maximum re-tries is pre-set by software developers. For instance, under a dictionary attack, once the number of re-tries exceed the pre-set value, the chip will trigger a self-locking mechanism". So to access the chip and thus the code inside it, you have to know the PIN, otherwise after let's say 10 tries you will be locked out. I personally can't see any way anyone could compromise this system. It doesn't matter what goes in or out, what matters is what runs inside the dongle ARM processor. Physical forced access would destroy the chip. Electrical access would require the PIN, or the chip locks up. How else could it be compromised?
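As a simplified model of the self-locking behaviour quoted above (not the vendor's actual firmware, which would keep the counter in secure non-volatile storage), the logic amounts to a persistent failure counter that permanently locks the chip once the developer-set limit is exceeded:

```python
MAX_RETRIES = 10  # limit pre-set by the software developer

class DonglePinLock:
    """Toy model of the PIN retry counter described in the datasheet quote."""

    def __init__(self, correct_pin: str) -> None:
        self._pin = correct_pin
        self._failures = 0
        self.locked = False

    def verify(self, attempt: str) -> bool:
        if self.locked:
            return False              # chip refuses all further access
        if attempt == self._pin:
            self._failures = 0        # a correct PIN resets the counter
            return True
        self._failures += 1
        if self._failures >= MAX_RETRIES:
            self.locked = True        # self-locking mechanism triggers
        return False
```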

I pretty much agree with your point of view that all dongles can be hacked; it is just a matter of time and cost. If your encryption scheme is well designed, the EAL 5+ chip should be secure enough to protect your software from malicious attacks.
And I think that if you can READ the dongle, it probably means you have already hacked it, or it proves there is a fatal vulnerability in the encryption scheme.
BTW, the link you gave above does not work. Are you referring to this dongle? http://www.senselock.com/en/productinfor.php?nid=180&id=142&pid=

There are companies (such as break-ic.com) that publish lists of MCUs they can break.
After breaking one, they give you only hex files.
In the MCU world, every manufacturer has its own disassembler, because every MCU core has its own hardware architecture, and there is no guarantee that the disassembler you need even exists.
So you should look for dongles whose MCU cannot be broken, or whose MCU has no disassembler.
Or you can build your own dongle.


User stories for integration in Scrum [closed]

I am working on a project that has very complex integration needs, specifically with receiving and sending EDI data and all the "fun" stuff that happens in between. I can definitely focus efforts around data processing (validation, required fields, transformation), but the problem I am having is how to frame stories and epics in the backlog to plan and track work.
It is very easy to say "As a manager, I can deny a vacation request so that I can make sure that I have enough workers on staff to meet my commitments." Actually, I am very very good at this, but I am very new to this kind of integration effort.
For a big integration project, it is tougher to say who the user is and what the value is. The EDI integrations are just interface (non-functional) requirements, but the implementation is a big effort.
Can anyone provide some guidance on how to structure / frame these kinds of requirements in the product backlog I am creating?
Mike Cohn has something to say about this; I think this last paragraph is very relevant:
But, you should be careful not to get obsessed with that template. It’s a thinking tool only. Trying to put a constraint into this template is a good exercise as it helps make sure you understand who wants what and why. If you end up with a confusingly worded statement, drop the template. If you can’t find a way to word the constraint, just write the constraint in whatever way feels natural
Scrum does not specify that requirements should be written as user stories, and you should use whatever technique works best for you. If you are battling with "AS A ..." type stories, try "IN ORDER TO <achieve some value> AS A <role> I WANT <some feature>". If that does not work, use use case modeling.
Requirements are not contracts, but placeholders for communication. The key here is to have just enough information for planning purposes, giving the team a sense of what has to be done. The details can be discussed in the sprint.
What I do in situations like this is:
1) Try and come up with the simplest bit of end-to-end functionality that we can implement for the integration.
2) Try and come up with a use case for that integration
3) Translate that into stories (optional step: Stories aren't a law of physics. You use 'em if they're useful.)
For example:
1) Okay - looks like authentication is the most trivial thing to implement that touches everything.
2) Hey - authentication by itself is useful. We can use it to know whether this group of users can access the data.
3) "As a site administrator I want to make sure that only authorised stuff have access to Foo to prevent valuable information being publicly accessible"
This way you'll always have a working EDI system - it just covers a subset of the functionality. A subset you can grow over time - hopefully in order of the importance of the functionality to your business.
My real preference, however, would be to dig a bit further into why the EDI is being done. Generally it won't be because "EDI" is a feature that people want. It'll be because the EDI is necessary for some other bit of functionality in the system.
In which case, rather than having a separate EDI project, I would much rather use whatever the thing-that-needs-EDI is to drive the development of the EDI layer. The stories in (3) above will then be coming from a live project - and you'll be much more likely to build what you need and avoid waste.

Open Source: How to motivate translators to bring localizations from 70% to 100% before release? [closed]

I maintain an Open Source Android app. Every once in a while, some anonymous hero localizes it to their mother tongue, sending files or using our online tool.
At first I thought the magic of collaboration would be enough to provide timely localizations, but actually, the UI strings change, and each release ships with roughly:
5 languages localized at 100%
8 languages localized at 70% (because recent strings have not been localized)
I am extremely thankful, but is there something I can do to bring the localizations from 70% to 100% when the release comes?
I send messages on the mailing list at code freeze a month before each release, and then a week before, but most of the people who contributed localizations don't read the mailing list; in fact, most of them were probably just good Samaritans passing by.
Should I stalk the translators and ask them personally?
I have been thinking about having a person responsible for each language. This person (what should I call the role?) would be "responsible" for bringing the translation to 100% before each release. Their names would be listed in the "About" dialog. Is this a good or a bad strategy? Any tips?
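As an aside, the 70%/100% figures above are easy to measure automatically. Here is a rough sketch for the standard Android resource layout (res/values/strings.xml as the source language, res/values-xx/strings.xml per locale); the res path is an assumption about the project layout.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def string_keys(strings_xml: Path) -> set:
    """Names of all <string> resources defined in one strings.xml file."""
    root = ET.parse(strings_xml).getroot()
    return {e.get("name") for e in root.iter("string")}

def report(res_dir: str = "app/src/main/res") -> None:
    res = Path(res_dir)
    base = string_keys(res / "values" / "strings.xml")  # source language
    # Note: values-* also matches qualifier dirs like values-v21 or
    # values-land; a real project would filter those out.
    for locale_dir in sorted(res.glob("values-*")):
        xml = locale_dir / "strings.xml"
        if not xml.exists():
            continue
        done = len(string_keys(xml) & base)
        missing = len(base) - done
        print(f"{locale_dir.name}: {100 * done / len(base):.0f}% complete "
              f"({missing} strings missing)")

if __name__ == "__main__":
    report()
```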
It's the 90-9-1 principle - http://www.90-9-1.com/ - and designating people in charge of things isn't going to do it. You can offer cash - rewards/pay - or you can groom them. If you bring money into the mix, remember that it quickly becomes tradeoff analysis. People will compare what you offer versus what they can earn on their own. Since I assume you don't have that much money, you don't want that sort of comparison.
Realistically, grooming them is the better option. You've done the first step - include them - show that their fixes, updates, etc. are included and help the product. The next step is publicly thanking and appreciating them. That will get you the first 80%, as you've seen. The step after that is getting them personally committed. Start interacting with them directly. Send them your thanks, and not just in email. If you have a product t-shirt, send them one with a handwritten note. In your release notes, link to their sites. If you ever see them in person, buy their coffee, whatever. The point is that you go out of your way - however small - to acknowledge that they're going out of their own way.
I'm the Project Lead of the Open Source Project Management System web2project and have been doing this longer than I care to consider. ;)
Use both a carrot and a stick approach. Giving people credit in your "About" screen, and calling them the lead translator for a particular language (if they're willing to accept that responsibility) is a good thing. Don't include a translation until you've gotten someone to commit to being the lead translator for that language; just like with code, you don't want to accept contributions unless (a) you are able and willing to maintain those contributions or (b) someone who you consider reliable has volunteered to be responsible for maintaining it.
The "stick" is that if someone stops maintaining a language, and fails to appropriately pass the responsibility off to someone else, you will remove that language from the next version of your application. Most people who translate your app probably do so because they would prefer to use your app in their native language. The threat of removing the language from the next version, or the wake-up call when they find that the next version doesn't contain their language, might inspire them to come back and finish up the last 30% translation work.
You are getting it wrong: community translation is an ongoing, never-ending process.
This doesn't mean your approach is wrong, but you have to live with the fact that localization is always a compromise. Usually, it is better to have 20 languages with 50% translation than 10 languages 100% translated.
If one language is important, it will have more users, so it will have more contributors and the translation rate will be greater.
You don't know when and if the translation for a specific language will be 100%, probably never.
The good part is that you shouldn't care about reaching 100%; you should spend your effort on motivating people to contribute.
As you probably already know, community translation doesn't play well with packaged products.
The solution to this problem is to change the way you work and provide partial translations in your release and a simple update system that updates them (preferably a silent one, like the Chromium update).
PS. If you need to assure a 100% translation rate for a limited number of languages before your release, consider paying a translation vendor to do it.
Well, you get what you pay for. The strategy that you are planning to implement will not give you anything, because heroes have their own lives. The only way to get this done is to engage more people to translate, because the completion percentage is a function of community size.
If your application is useful enough, you may try to give a link "help to translate into your language" and this could do for a while.
Think about creating a discussion group or board where you can post a "Call for translation" when you are ready to ship. Translators usually don't have time to track changes.
One thing to note is that localization people need time to do the localizations, and the less they're getting paid for it, the longer they need. This means that you should at least try to avoid changing the localization keys (especially including defining new ones) shortly before a release. I know it's nice to not be constrained like that, but the reality is that if you're not paying then you're not going to get a fast turnaround in the majority of cases, so you have to plan for this and mitigate in your schedule. In effect, you have to stop thinking about the text shown to a user as something that can be finalized late in the project and instead start treating it as an important interface matter that you'll plan to lock down much earlier.

Open sourcing a commercial site [closed]

I am building a 'software as a service' website that will charge users a small monthly fee. I am considering changing the GitHub repository over from private to public, essentially open sourcing it. Is this suicide? I would like the community to be able to benefit from the code. It is unlikely that I will accept any pull requests, so I'm not going to gain anything in that regard. The site is community based, so I think most of the value would be lost by someone self-hosting it. It is for a very niche audience, so I doubt someone else will start a competing hosted service. I would really like the code to be in the open, but not at the expense of my idea, of course. How does everyone else feel about this? What is common practice?
Conclusion: I'm keeping it closed for the time being. I may look to open source it sometime after launch, however.
Since you are not going to accept pull requests, you might as well hold on, get your code stable, and then publish it for others to learn from and benefit from. You are still building the service, so it is not going to attract too many eyeballs yet either.
From a business point of view, you might want to have a reasonable community around your service before you open source it. If you are still budding, who knows whether it will be taken up by a stronger competitor. If your idea is patented, it's a different story.
To be honest, and this is not likely going to be a popular answer, but personally I would keep it closed for a period of time.
The reasons for this are simple: establish your foothold in the marketplace, build your user base and your brand; that then gives you a mechanism to market your product further by selectively or completely open sourcing components of your system.
I say do it, for both personal benefit and potential strategic benefit ... after all, a lot of software IS a service.
Most open-source projects stand to provide a return in the right circumstances. Don't forget: unless you have a patent or some massive advance so complex and unfathomable that nobody can re-implement it, anyone who wants to re-implement it will do so anyway, so you have little protection staying closed source ... and, even more interesting, the open-source equivalent may well overtake your proprietary one if it garners support.
People may send you great ideas you never thought of, or take your codebase in a direction you would not have predicted. Unless you have significant value in terms of IP or strategic position tied up in the source code ... releasing it will probably do more good than harm.
Also, by being first to the open-source arena with your code, you gain control over any resulting community-driven development ... if someone reimplemented your functionality and went open source, could you compete on any front?
I know it is a cliché, but probably for good reason: read The Cathedral and the Bazaar, and the essay Open Source as a Signalling Device - An Economic Analysis, which is an interesting read. Michael E. Porter's texts on competition analysis are interesting when held up against the mixed value economics and competitive forces of open source, and show how disruptive open-sourcing a product can be to competitors ... and how it can add value to your market position. Also, while counterintuitive, it can raise the barriers to successful entry by competitors.
More food for thought on the advantages and disadvantages of open sourcing:
What the DoD thinks of open source
Alfred H. Essa "Innovation and strategic advantage: lessons from open source" (warning, journal link)
I like to fix flaws wherever I see them, and perhaps I am one of your users. I'd rather send a patch than send a potentially nagging-sounding email any day.
What benefit are you hoping to gain from making the code open source? If you don't want the input of other developers then there are very few advantages and a whole lot of potential disadvantages.

Why aren't voting machines open source? [closed]

Sooo...it's only sort of programming related, but I figure it's election day, right? Is there a single good reason why they aren't, not necessarily open source in that anyone can contribute, but open source in that anyone could inspect the source?
Voting machines aren't open-source because lobbyists for the "electrical till" industry successfully hoodwinked politicians not qualified to make technology choices into buying their snake-oil. This was accomplished with a mix of anti-FOSS FUD and good ol' fashioned bribery campaign contributions.
Update: I will try to post links here from time to time that show how vendors respond to critical examination. Feel free to add your own. (Pro-OSS–only: "the man" can make his own post!)
Interesting Email from Sequoia
In Belgium, the sourcecode for the voting machines is freely downloadable.
In the context of this discussion, you might find this paper interesting:
Secret-Ballot Receipts: True Voter-Verifiable Elections
It's written by David Chaum, the cryptographer responsible for DigiCash, among other things. From his bio page on Wikipedia, I also found End-to-end auditable voting systems.
Update! Now it seems we can see if this really works: First Test for Election Cryptography.
Looking back in time now, I've read a couple of articles on the experiment in Takoma Park, and this system actually seems different from the one described in the original paper. However, it is still by David Chaum, and still supports the end-to-end audit properties. The system is called Scantegrity II.
The reason they aren't open source is that, as Kent mentioned, it wouldn't help. You could open source the code, but there's no way to ensure that the voting machine you are using is actually running the code that was open sourced.
There is no reason that open source code is better than closed source in this case.
How you voted must always remain a secret for obvious reasons. The ONLY real safeguard is the paper trail.
I WORKED with these machines, and if so inclined I could have written malicious code that flips votes the way I wanted after 10 cast ballots, defeating whatever ridiculous Logic and Accuracy tests were thrown at the machine before deployment (we never went past one test vote).
Randomly pick a certain percentage of machines and compare the paper trail to the electronic tally. If Diebold had been confident of its machines then they would have insisted that this be the last step in any election.
Security Through Obscurity!
The problem is that open-sourcing the software would be a no-op.
These machines don't have any decent cryptography, and there are demonstrated, relatively easy ways to compromise them, simply by hot-swapping a ROM chip in the voting booth or using a device that alters the records in the results cartridge.
Youtube: Sequoia Part 1 Those with access can hack with programmed ROM chip
Youtube: Sequoia Part 2 Logic and Accuracy Test vs Election Mode with vote-stealing firmware
Youtube: Sequoia Part 5 Manipulating Sequoia Voting Results Cartridges from Precincts
#Mnementh: The bad cryptography and the possibility of swapping the ROM chip have nothing to do with open-sourcing the code? That is exactly the point.
There are only 3 logical reasons for open-sourcing this code:
To put how the votes are counted under scrutiny, to be certain it is done right.
For somebody to be able to modify that code for their own needs.
To put the software into public domain so public committers can improve on it.
Points 1 and 3 are blown out of the water in terms of usefulness and "proving your vote counts" because you have no assurance that the code you are seeing/improving runs on these devices.
So that leaves only point 2 as useful, and you are not going to own your own voting machine, nor would you need one for anything other than nefarious purposes or simply to prove its vulnerability.
In the majority of cases, all open-sourcing would mean is that more information would be publicly available on how to compromise these machines, so you would no longer need physical access to one in order to reverse engineer its software and develop compromised ROM chips for use in said devices, grossly reducing the barrier to entry for compromising the voting system.
Granted, even in a non-open-source state this information can still leak, and you just have a false sense of security because you assume "there's no leak, I am safe"; but on the other hand, if you open source it, people will assume "hundreds of people have looked at the source code, I am safe", which is an equally bad false sense of security.
People are looking for a silver-bullet safe way of voting, and sadly, there is none. Not without raising a breed of people, brought up by non-committal monks in isolationist shrines, purely for the task of witnessing and counting votes accurately, trained to be amoral and impossible to bribe into switching the vote.
(It would sort of be like the 'Dark Angel' series, except with voting agents instead of assassins, and we all know how that show works out: one of them would go rogue, we'd trust them, and they'd screw us all.)
Because politicians buy them. Anything politicians get their hands in goes to shit, because 99% of the time their only experience is in running for office, not in doing things like adequately vetting hardware and software.
Also, kickbacks.
The truth hurts, doesn't it?
There is no specific reason not to open-source the software (and even the hardware layout) of voting machines. It has no security impact, as some try to claim, because whether closed or open source, the ROM can be swapped. The machine needs some sort of verifier to check whether the loaded code is really the one certified for the election. Open-sourcing would make no difference.
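As a rough sketch of the kind of verifier meant here (file names are hypothetical, and the hard part in practice is trusting whatever performs this check), the idea is simply to hash the loaded firmware image and compare it against the digest certified for the election:

```python
import hashlib

def firmware_digest(image_path: str) -> str:
    """SHA-256 of the firmware image actually loaded on the machine."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(image_path: str, certified_digest: str) -> bool:
    # The digest would come from whatever body certified the build for the
    # election; matching it only helps if this check itself can be trusted.
    return firmware_digest(image_path) == certified_digest
```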
Because if they were they would not be able to blame inaccurate votes on calibration-errors on the touchscreen.
The people responsible have a "security by obscurity" bad meme stuck somewhere
The people building the software don't want to help competitors
The people building the software fear embarrassment
There are not enough people in the legislative process who understand the flaws in all of the above
So far, most replies have been technical in nature, but most likely, voting machines are not open source because the company under contract to develop them has no incentive to make them open source.
If a company develops an open source voting system, anyone can come around later to support that system. And, quite honestly, I doubt the government would accept the equivalent of a SourceForge project as the basis for an entire election.
Perhaps there should be an honest-broker authority that oversees the development of an open-source voting system, and contributors to that system should be vetted before they can view or commit source code.

How to measure "usability" in a specification-requirements document? [closed]

Starting to look at my final-year project now, so I'm doing the specification/requirements document. It just so happens that this project requires a high degree of "usability" - I don't know if this is the right word in English, but what I mean is that it should be really easy to use from a user's point of view. In all the projects I've worked on so far, usability hasn't really been a big factor, so I could just write some gibberish to get around it. I have always asked our teachers how they would specify usability requirements, but no one has yet given me an answer I felt was good enough.
Our teachers have always preached that any requirement given on a project should be testable, but how do you test how easily accessible your user interface is?
Say I had a real-time application running. There it wouldn't be too hard to say "an entry should be deleted in less than 100 ms after the initial call". But it's a lot harder to say "the user interface should be 86% intuitive".
I guess this is a tough nut to crack, but surely I can't be the first person in the world to have thought about this, let alone having problems with it.
... how do you test how easily accessible your user-interface is?
With usability tests.
Basically, you grab a bunch of your friends (because you won't have any money to encourage strangers to participate) give them the documentation a new user would have and ask them to perform the system's key use cases.
Ideally, you want your test users to have at least some of the qualities of your target users, so if your system is aimed at a technical audience then your classmates will work; however, if your system is aimed at the general public then you're going to want to get your friends in Arts, Human Kinetics, etc. to participate.
So how do you turn that into requirements? You identify your key use cases and stipulate how usable they should be (walk-up usable, a few minutes with the documentation, real actual training...), and then verify that your test subjects can complete the use cases without too much frustration, with the right amount of training, in a reasonable time.
Try to define usability in terms of "test-able" requirements.
You already gave yourself the answer, because usability can be described in terms of requirements like "an entry should be deleted in less than 100 ms after the initial call".
What makes a user interface 86% intuitive? This can't be answered without some form of measurement. You need to ask which features can make a user interface intuitive. Talk to the customer and the potential future users. Gather features (or better, dig for them!) that would make working with the software easier.
Maybe you get a list of features like:
The department’s organization must be shown in a hierarchical tree view.
In this tree view, drag & drop must be possible.
The font size must be configurable and saved for each user.
At the top of the screen there must be a list of important links. Each user may configure and save his own personal list.
...
Make requirements out of these features. They are "testable" and thus "measurable". If in the acceptance tests it turns out that 17 out of 20 features are working, you have 85% success.
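A tiny sketch of that measurement: treat each usability feature as a pass/fail acceptance test and report the overall percentage. The feature names here are just illustrative entries from the list above.

```python
# Pass/fail results per usability feature (illustrative names only).
results = {
    "hierarchical tree view of departments": True,
    "drag & drop in the tree view": True,
    "configurable, per-user font size": False,
    "per-user list of important links": True,
}

passed = sum(results.values())
print(f"usability acceptance: {100 * passed / len(results):.0f}% "
      f"({passed} of {len(results)} features working)")
```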
EDIT: This works in a project environment where you need to deliver measurements (as in many commercial projects). If you have a "softer" project environment where not everybody is focused on figures, then sticking too rigidly to this formalism may be counterproductive, since flexibility and agility may suffer.
I would advise you against quantifying usability requirements. The problem is not so much that you can't define metrics. You could say, for instance, that
it should take a person no longer than x seconds to find y on the site, or
the conversion rate of the store has to be higher than z%
etc etc
The problem is rather that you have to spend time and resources on finding acceptable target values for your metrics that you can actually reach. What is an acceptable time to find a piece of content?