Given the vulnerability of all but hardware and paper wallets, would anyone know how to use an unsupported hardware or paper wallet with Parity?
I simply want to be able to manually enter an address generated outside of Parity and fully expect to have to bypass the UI and edit a resource file to do so.
Any ideas?
You can use MyEtherWallet to generate a Paper wallet offline. To do that, you have to:
Download the latest release from Github. It's usually called etherwallet-v3.11.3.3.zip or something like that.
(Optional) You can use a USB stick to transfer the wallet to an offline, air-gapped, or live-Linux computer system.
On your computer, extract the archive into a directory, locate index.html, and open it with your favorite browser. Note that this does not require a working internet connection.
Enter a strong password, and make sure you do not forget it.
Generate your new Ethereum wallet, make sure you have redundant backups.
Consider printing it out.
If you ever want to restore your wallet to spend the funds, you can import it either in MyEtherWallet again, or in Parity. To import it in Parity without using the user interface, you have to:
Copy the JSON file into your Ethereum keystore directory on your machine:
On Windows: C:\Users\You\AppData\Roaming\Parity\Ethereum\keys\ethereum
On MacOS: /Users/you/Library/Application\ Support/io.parity.ethereum/keys/ethereum/
On Linux: /home/you/.local/share/io.parity.ethereum/keys/ethereum/
Upon start, Parity will recognize the imported key.
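If you prefer to script that copy step, here is a minimal C# sketch; the destination paths are the ones listed above, and the source file name is simply whatever JSON file MyEtherWallet exported:

    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    class ImportKey
    {
        static void Main(string[] args)
        {
            // Path to the JSON keystore file exported from MyEtherWallet ("wallet.json" is a placeholder).
            string source = args.Length > 0 ? args[0] : "wallet.json";

            // Pick Parity's keystore directory for the current OS (paths as listed above).
            string keysDir;
            if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
                keysDir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                                       "Parity", "Ethereum", "keys", "ethereum");
            else if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
                keysDir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                                       "Library", "Application Support", "io.parity.ethereum", "keys", "ethereum");
            else
                keysDir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile),
                                       ".local", "share", "io.parity.ethereum", "keys", "ethereum");

            Directory.CreateDirectory(keysDir);
            File.Copy(source, Path.Combine(keysDir, Path.GetFileName(source)), overwrite: false);
            Console.WriteLine($"Copied {source} to {keysDir}");
        }
    }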
Disclosure: I work for Parity.
I'm playing around in .NET Core and attempting to make use of the user secret store; some details are here: https://docs.asp.net/en/latest/security/app-secrets.html
I'm getting along with it well enough when working locally, but I'm having trouble understanding how this could be used effectively in a team environment, or if I wanted to work on this project from more than one computer.
The store itself (at least by default) keeps its configuration JSON file within the user's AppData folder (on Windows). This feature is good if you're uploading the project to GitHub, to hide your API keys, connection strings, etc. This is all great when it's just me, on one machine, working on a project. But how does this work in a team environment, or on multiple machines? The only thing I can think of is to find the configuration file, check it into a private repo, and make sure to replace it in the correct directory when changes occur.
Is there another way to manage this that I'm not aware of?
As you already know, the Secret Manager tool provides another way to avoid checking sensitive data into source control by adding this layer of control.
So, where should we store sensitive configuration instead? The location should obviously be separate from your source code and, more importantly, secure. It could be in a separate private repository, protected fileshare, document management system, etc.
Rather than finding and sharing the exact configuration file, however, I would suggest keeping a script (e.g. .bat file) that you would run on each machine to set your secrets. For example:
dotnet user-secrets set MySecret1 ValueOfMySecret1 --project c:\work\WebApp1
dotnet user-secrets set MySecret2 ValueOfMySecret2 --project c:\work\WebApp1
This would be more portable between machines and avoid the hassle of knowing where to find and copy the config files themselves.
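For completeness, the values set this way are read back through the normal configuration API; a minimal sketch, assuming the Microsoft.Extensions.Configuration.UserSecrets package is referenced and the project has a UserSecretsId configured:

    using System;
    using Microsoft.Extensions.Configuration;

    class Program
    {
        static void Main()
        {
            // AddUserSecrets<Program>() reads the secrets.json written by
            // "dotnet user-secrets set", keyed by the project's UserSecretsId.
            IConfiguration config = new ConfigurationBuilder()
                .AddUserSecrets<Program>()
                .Build();

            // "MySecret1" matches the key set in the script above.
            Console.WriteLine(config["MySecret1"]);
        }
    }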
Also, for these settings, consider whether you need them to be the same across all developers in your team. For local development, I would normally want to have control to install, use, and name resources differently than others in my team. Of course, this depends on your situation and preferences, and I see reasons to share them too.
I have an embedded system which runs firmware and has USB mass storage of size 79 kB. So when you plug the device into any computer (Mac/Windows), it pops up as a 79 kB flash drive. The firmware creates files which hold transaction records. The objective is to display these transactions (tables and simple graphs) to the user. I've narrowed the display down to a web browser. So the user (with a Mac/Windows PC) can plug in the USB mass storage device, open an HTML file on the mass storage drive, and view all the transactions in the form of tables and simple bar graphs. The tricky part comes here: the device (firmware) needs to update its clock, and this time input has to be sourced from the Mac/Windows PC. How can this be achieved?
This is the minimum requirement. Further, through the web browser the user wants to write some configuration parameters, e.g. through a text box and a submit button in the HTML page.
NOTE: The USB mass storage class and the web browser approach were selected so that there are no prerequisites for the user.
Please suggest an alternative if this can be done using another approach, e.g. a different USB class or some other application available locally on a Mac/Windows desktop/laptop. The application should run on both Mac and Windows, i.e. the code should be the same but can be built into separate packages, one for Mac and the other (.exe) for Windows. Please suggest a platform for this that has the same source but can be built for both Mac and Windows. Thanks!
As far as I know, there is no way a web browser could write to a file. If such a thing were possible, it would be a huge security issue.
You have to write a piece of native software to do all the tasks you name. That can be done in pretty much any programming language, and if you're developing embedded systems I reckon you must have some experience in programming.
I'm looking at doing something similar and have an idea, though you may be better equipped to run with it than I am. Have the device contain a directory called "SET_DATE" with files "YEAR15" through "YEAR99", "MON01" through "MON12", "DATE01" through "DATE31", "H00" through "H23", "M00" through "M59", "S00" through "S59", and "SET"; each such file should start at a different sector, though none of the sectors in question need to contain any data (they need not physically be stored anywhere). To set the date to July 4, 2020 at 12:34:56pm, read the following files in sequence (a host-side sketch follows after this answer):
SET_DATE/YEAR20
SET_DATE/MON07
SET_DATE/DATE04
SET_DATE/H12
SET_DATE/M34
SET_DATE/S56
SET_DATE/SET
The last access should cause the unit to set its clock. If a user might want to set the clock more than once, that could be accommodated by having a bunch of essentially-identical directories under SET_DATE (so setting the date the first time would use SET_DATE/00/YEAR20, the second time SET_DATE/01/YEAR20, etc.) and/or by having the drive unmount/remount itself if necessary to clear out any caching.
I would think it unwise to have directory fetches trigger actions, since Windows or an anti-virus tool might decide to pre-cache all the directories in a drive when it is mounted. I would not expect Windows or a browser to eagerly load files, however, so I would think one could have read accesses trigger actions.
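To make the host side concrete, here is a minimal sketch of what a small helper program could do, assuming the device mounts at E:\ and uses the file names proposed above:

    using System;
    using System.IO;

    class SetDeviceClock
    {
        static void Main()
        {
            // "E:\" is an assumption; in practice you would locate the 79 kB drive by label or size.
            string root = @"E:\SET_DATE\";
            DateTime now = DateTime.Now;

            // Read the marker files in the order the firmware expects; the reads themselves
            // carry the information, the file contents are irrelevant.
            string[] sequence =
            {
                $"YEAR{now:yy}",
                $"MON{now:MM}",
                $"DATE{now:dd}",
                $"H{now:HH}",
                $"M{now:mm}",
                $"S{now:ss}",
                "SET"
            };

            foreach (string name in sequence)
            {
                // Read a single byte so the OS actually issues a read for this file.
                // Note: OS-level read caching can swallow repeated reads; the remount
                // trick mentioned in the answer above addresses that.
                using (var fs = File.OpenRead(root + name))
                {
                    fs.ReadByte();
                }
            }
        }
    }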
I have an application that people use through Remote Desktop/Terminal Server. The application supports digital signatures. However, the digital signature pad is on the client, while the program runs on the server. The signature pad also does not support being shared as a device through Remote Desktop (it is not listed with "Supported Plug and Play Devices" in local resources).
What is the best way of being able to send the signature to the server from the client machine? Preferably with the least amount of setup for the users (there are a lot of clients and a fair number of servers this must be done for).
My best idea so far is sharing the clipboard and using it to send messages from server to client (with the client application "polling" the clipboard for a special clipboard format). I feel like this may not be very fast or stable, though, as I don't think Remote Desktop was designed for it.
Also, we are open to [reasonable] language choices like C/C++, C#, Delphi (the application is written in this), etc. The signature pad is a Topaz TS460 (connected by USB).
Can anyone give me ideas on how this can be done or if the clipboard idea of mine is probably the best?
tl;dr: What is the best way of sending an image from a client to a server through remote desktop?
Update:
Well, I've done a bit of testing with plain ASCII text (I can't get files to transfer) and it seems that there are problems copying large amounts of text. I tried copying 43 MB of text, and after a long period of waiting I just got an empty clipboard (like it did a paste, but there was no text pasted). I was able to transfer about 2 MB of data, though, at decent speeds between server and client, so this may be feasible for signature images (which will be either JPEG or PNG compressed).
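For reference, the client-side polling loop I have in mind would be something like the following minimal sketch; the custom clipboard format name "SignatureRequest" is just a placeholder of my own:

    using System;
    using System.Threading;
    using System.Windows.Forms;

    static class ClipboardPoller
    {
        [STAThread] // Clipboard access requires an STA thread.
        static void Main()
        {
            const string format = "SignatureRequest"; // hypothetical custom clipboard format name

            while (true)
            {
                if (Clipboard.ContainsData(format))
                {
                    // The server-side app placed a request on the shared clipboard;
                    // here you would drive the Topaz pad and put the image back.
                    object payload = Clipboard.GetData(format);
                    Console.WriteLine($"Got request: {payload}");
                    Clipboard.Clear();
                }
                Thread.Sleep(500); // poll twice a second
            }
        }
    }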
Have you looked into using Remote Desktop Virtual Channels? http://msdn.microsoft.com/en-us/library/aa383509(VS.85).aspx
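I don't have a complete sample, but the server-side half boils down to a few wtsapi32 calls; a rough sketch follows. The channel name "SIGPAD" is a placeholder of mine, and the client side still needs a plugin DLL registered with the Remote Desktop client:

    using System;
    using System.Runtime.InteropServices;

    static class SignatureChannel
    {
        const int WTS_CURRENT_SESSION = -1;

        [DllImport("wtsapi32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
        static extern IntPtr WTSVirtualChannelOpen(IntPtr hServer, int sessionId, string virtualName);

        [DllImport("wtsapi32.dll", SetLastError = true)]
        static extern bool WTSVirtualChannelRead(IntPtr channel, uint timeout,
                                                 byte[] buffer, uint bufferSize, out uint bytesRead);

        [DllImport("wtsapi32.dll", SetLastError = true)]
        static extern bool WTSVirtualChannelClose(IntPtr channel);

        static void Main()
        {
            // Static virtual channel names are limited to 7 characters; "SIGPAD" is a placeholder.
            IntPtr channel = WTSVirtualChannelOpen(IntPtr.Zero, WTS_CURRENT_SESSION, "SIGPAD");
            if (channel == IntPtr.Zero)
                throw new InvalidOperationException("Could not open virtual channel: " +
                                                    Marshal.GetLastWin32Error());

            try
            {
                var buffer = new byte[64 * 1024];
                // Block for up to 30 s waiting for the client-side plugin to send the signature image.
                if (WTSVirtualChannelRead(channel, 30000, buffer, (uint)buffer.Length, out uint read))
                    Console.WriteLine($"Received {read} bytes of signature data.");
            }
            finally
            {
                WTSVirtualChannelClose(channel);
            }
        }
    }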
For the Topaz signature pad and credit card swiper you will need the serial type. It will work; I have already tried it. But I guess this question is too old for me to answer. Do iPads as well as other tablets work in a Terminal Server or Citrix setup?
I have not tried with Remote Desktop, but one thing that comes to mind is installing a good macro tool on the client. AutoHotKey ( http://www.autohotkey.com/ ) is a free tool that lets you create runable scripts that can do things like open applications and send key strokes to them.
I'm not sure how well it would work with remote desktop, but I know for certain that you can easily setup a script that would launch an application, send it "key strokes" to generate data, copy the data to the clipboard, switch to another application and then paste in the data.
When AutoHotKey is installed, you have the option of associating the scripts' file types with the app, so that end users could just double-click your script's desktop icon to run it. No command-line messiness for them.
If all you need to do is transfer an amount of data (a file) from the client to the server, it is fairly easy. Polling for a file also seems more logical than polling via the clipboard.
When you connect, the client should enable sharing a hard disk (at least one). You can specify the options every time you connect, or you can send the client a preconfigured .RDP file.
If you can get the user to put the file in a fixed location, you can access the file C:\Shared\File.jpg using a path like \\tsclient\c\Shared\File.jpg.
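A minimal sketch of the server-side polling, assuming the client's C: drive is shared and the agreed location is C:\Shared (the local destination path is my own choice):

    using System;
    using System.IO;
    using System.Threading;

    class TsClientPoller
    {
        static void Main()
        {
            // \\tsclient\C maps to the client's redirected C: drive inside the RDP session.
            const string remoteFile = @"\\tsclient\C\Shared\File.jpg";
            const string localCopy = @"C:\Signatures\File.jpg"; // destination on the server

            Directory.CreateDirectory(Path.GetDirectoryName(localCopy));

            // Poll until the client application drops the signature file in the agreed location.
            while (!File.Exists(remoteFile))
                Thread.Sleep(1000);

            File.Copy(remoteFile, localCopy, overwrite: true);
            Console.WriteLine("Signature file copied from the client.");
        }
    }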
Here's an explanation (with nice screenshot) how to copy files with Remote Desktop:
http://www.jakeludington.com/ask_jake/20051218_copying_files_with_remote_desktop.html
I wasn't sure if your question rules already out this approach or not.
Consider the following scanning procedure in a typical document handling webapp:
The user scans a document using a scanner connected to his/her computer
The scanned image is saved locally on the user's computer as a BMP/JPG/TIF/PNG file
The user hits a file upload "Browse.." button in the web application
The user is presented with a file dialog which he/she uses to locate the scanned image
The user hits "Upload image" and the scanned image is uploaded to the server where it is stored
This process is quite complicated and I'd like to reduce the number of steps in order to make the process more user-friendly/foolproof. Under ideal circumstances, the above steps would be replaced with only one step, in which the whole procedure (initiate document scanning, complete document scanning, and upload the resulting image) is automatically triggered from the webapp when clicking, say, "Scan and upload". Unfortunately, it seems like the state of "web/scanner integration" is quite poor, so this might be utopia.
How would you tackle this problem? More specifically, how would you go about reducing the number of steps involved in the use case described?
Well, two years have passed, so here's an update on the state of the art for those just joining us.
Both Dynamsoft and Atalasoft have multi-browser web-scanning toolkits which are compatible with any server-side stack. Both require the user to install an ActiveX (in IE) or an NPAPI plugin (Chrome, Firefox, etc.) to get access to the scanner via the TWAIN API.
Obviously, if you have the time or a limited budget, you can create your own plugin. I heartily recommend the FireBreath plugin framework, and using any TWAIN library rather than writing your own TWAIN code.
Once the ActiveX or plugin is installed, the rest of the work is a combination of JavaScript & HTML on the client, and some kind of handler on the server to accept and process the incoming image, which can be made to look just like a multipart form submit with an attached file.
I recommend doing the image upload in JavaScript using AJAX, because it is then part of the same browser 'session' as the web page, and it inherits the browser's proxy settings, session cookies, and server-side authentication. I don't know about Dynamsoft's control, but the Atalasoft toolkit includes such AJAX uploading. The image(s) are handed from the plugin to the JavaScript as a base64-encoded string, so no local file is actually created.
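On the server side, the handler can stay very plain; here is a rough sketch of an ASP.NET (System.Web) generic handler that accepts either a normal multipart upload or a base64-encoded image field. The field names "image" and "imageBase64" are my own assumptions, not anything the toolkits mandate:

    using System;
    using System.IO;
    using System.Web;

    // Register as e.g. upload.ashx; accepts the scanned image posted by the page's JavaScript.
    public class ScanUploadHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            string savePath = context.Server.MapPath("~/App_Data/scans");
            Directory.CreateDirectory(savePath);

            if (context.Request.Files.Count > 0)
            {
                // Ordinary multipart/form-data upload with an attached file.
                HttpPostedFile file = context.Request.Files[0];
                file.SaveAs(Path.Combine(savePath, Path.GetFileName(file.FileName)));
            }
            else if (context.Request.Form["imageBase64"] != null)
            {
                // Image handed over as a base64 string by the scanning plugin.
                byte[] bytes = Convert.FromBase64String(context.Request.Form["imageBase64"]);
                File.WriteAllBytes(Path.Combine(savePath, Guid.NewGuid() + ".png"), bytes);
            }

            context.Response.StatusCode = 200;
            context.Response.Write("OK");
        }

        public bool IsReusable { get { return true; } }
    }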
Disclaimer: I work on Atalasoft's WingScan web-scanning toolkit.
If your target audience is running Windows and IE, and you don't mind spending a few $$, Atalasoft has some components that will do just what you're looking for.
I actually saw someone at the bank do this while setting up my account, and I was totally amazed. The bank in question was using Windows and IE; I assume you're in an equally controlled environment. I think the bank used a combination of a custom/predictable scanner driver and an ActiveX control.
A page loaded which said "Open the scanner"; the staff member popped the document in and hit Scan on the webpage, then the page changed to say Scanning, then it showed the scanned document on the web page for the staff member to approve. I can only assume that the scanner driver sent the image to a certain location and the ActiveX control was polling for it to appear; once it appeared, it showed the image on screen, and once the staff member had approved it, the ActiveX control uploaded it in the background. She opened the next page and carried on with the rest of the process.
God knows how they made all that tech work but it can be done.
Silverlight 4 is coming out soon. It is supposed to have the ability to interact with COM objects on the user's computer (provided they are running Windows). In theory you call WIA methods from your Silverlight web page.
We implemented a Remote Deposit solution for a bank. It works only in IE. A WinForms DLL was created that interfaces with the LeadTools TWAIN DLL, which abstracts all the TWAIN minutiae. This approach is slightly better than using an ActiveX control. The .NET Framework would be needed on the client. The scanned images are posted back to a hidden variable on the page and are processed on the server.
Hmm, I've always wanted to look at a scanned file before I did anything with it, but I suppose that depends on your scanner and how much quality you need.
If the goal is to "automate the scanning and uploading process" as opposed to "write a web app", I'd write an AutoIt script to control the existing scanner software and a simple ftp program.
The option most likely to remove the most steps, would probably be writing a customized scan utility that the user would download and run on their local machine.
SANE or TWAIN would handle getting the scanned image. cURL could then handle uploading the image to your web app. To make things even easier for the end user, I would use something like a Comet connection to update the web page when the file was available.
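If the utility were written in .NET instead of shelling out to cURL, the upload step is only a few lines; a sketch assuming a hypothetical endpoint at https://example.com/upload and a form field name of my own ("image"):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class ScanUploader
    {
        static async Task Main()
        {
            // Path produced by the SANE/TWAIN step; the name is a placeholder.
            const string scannedFile = "scan.png";

            using (var client = new HttpClient())
            using (var form = new MultipartFormDataContent())
            {
                var image = new ByteArrayContent(System.IO.File.ReadAllBytes(scannedFile));
                form.Add(image, "image", scannedFile);

                HttpResponseMessage response = await client.PostAsync("https://example.com/upload", form);
                Console.WriteLine(response.StatusCode);
            }
        }
    }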
If that isn't an option, you might look into what options your users will likely have with their scanner's software. I believe many programs now support scanning to email or FTP.
The solution I have used for an intranet app, using multifunction scanner/copiers, was to scan to an SMB share that the web server had access to. The user just goes to the copier, scans to the share, and when they get back to their desk, they go to the new-scans page, which shows a list of all the new unprocessed files.
Since your audience is in a controlled environment, you can write your own browser extension/program based on WIA/TWAIN that does the scanning. If you choose browser extensions such as BHO/ActiveX/XPCOM, etc., you need to get the user's permission to install your extension. If you choose to write a program, you may need web deployment technologies like ClickOnce or Java Web Start so it can be launched from the web.
Interfacing with TWAIN is a pain on Windows. Complexity aside, you have to display a GUI written by different scanner driver developers. It may be the only way to support old scanners, or features not exposed via other interfaces, like full-speed multipage scans from a document feeder.
Microsoft's WIA makes interfacing with the scanner much easier with a scripting object model; however, scanner-specific features are not available, and some old scanners do not support the interface.
After scanning you can call a web service to notify the server and the web page can refresh periodically to check new images.
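To give an idea of how small the WIA client piece can be, here is a sketch using the WIA Automation library (add a COM reference to "Microsoft Windows Image Acquisition Library v2.0"); the save path is arbitrary:

    using System;
    using WIA; // COM reference: Microsoft Windows Image Acquisition Library v2.0

    class WiaScan
    {
        [STAThread]
        static void Main()
        {
            // Shows the standard WIA acquire dialog for a scanner and returns the image.
            var dialog = new CommonDialog();
            ImageFile image = dialog.ShowAcquireImage(
                WiaDeviceType.ScannerDeviceType,
                WiaImageIntent.TextIntent,
                WiaImageBias.MaximizeQuality);

            if (image != null)
            {
                System.IO.Directory.CreateDirectory(@"C:\scans");
                string path = @"C:\scans\" + Guid.NewGuid() + "." + image.FileExtension;
                image.SaveFile(path);
                Console.WriteLine("Saved " + path);
                // Next step: notify the server (e.g. call a web service) that a new image exists.
            }
        }
    }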
We have done something similar. We used a command-line TWAIN program, QuickScan (http://www.burrotech.com/quickscan.php), which costs $49.
1) We developed a small .NET application to run the QuickScan program as a shell command.
2) The command was assigned to the Scan button.
3) Once the user presses the Scan button, a prompt appears to enter the file name. The user saves the transaction ID as the file name.
4) Another .NET application (or maybe the same one mentioned before) reads this file and uploads it into the database, taking the filename as the transaction ID (a rough sketch follows below).
Worked like a hot knife through butter!
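A rough sketch of the glue described in steps 1-4; QuickScan's actual command-line arguments are omitted here, since they depend on the product, and the output folder and database call are placeholders:

    using System;
    using System.Diagnostics;
    using System.IO;

    class ScanButtonHandler
    {
        static void Main()
        {
            Console.Write("Transaction ID: ");
            string transactionId = Console.ReadLine();

            // Steps 1-2: run the command-line TWAIN program as a shell command.
            // Arguments are omitted; consult the QuickScan documentation for the real switches.
            var quickScan = Process.Start("QuickScan.exe");
            quickScan.WaitForExit();

            // Steps 3-4: the operator saved the scan under the transaction ID; pick it up and
            // hand it to whatever stores it in the database (placeholder method below).
            string scannedFile = Path.Combine(@"C:\Scans", transactionId + ".tif");
            byte[] image = File.ReadAllBytes(scannedFile);
            SaveToDatabase(transactionId, image);
        }

        static void SaveToDatabase(string transactionId, byte[] image)
        {
            // Placeholder: insert into your transactions table keyed by transactionId.
            Console.WriteLine($"Would store {image.Length} bytes for transaction {transactionId}.");
        }
    }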
You can try displaying the transaction ID in IE and having the user select the ID before pressing Scan. Your application will read the SELECTED text and save the file using the SELECTED text as the file name. We haven't tried it, but it should work.
It is only utopia if you think that web applications are limited to web browsers; in fact, web applications can include a lot of different technologies besides HTML and JavaScript.
The cool way of solving that problem -- in fact, I have already used this for some USB-serial devices -- is to implement your application using SOAP+XMPP. You can do that in Perl by using XML::CompileX::Transport::SOAPXMPP, Catalyst::Engine::XMPP2, Catalyst::Controller::SOAP and Catalyst::Model::SOAP.
The interesting thing about using XMPP is that it simplifies the management of addressing, since you use the JID (Jabber ID) to look for the software agent, not some host+port addressing schema. The second interesting part of using XMPP is to more easily support the server pushing information to the client.
But if you don't want to handle XMPP you still can do the same thing with a lightweight embedded http server -- HTTP::Server::Simple, in Perl -- and somehow register the current scanner address in the server so it can call back.
And a last option, which is not so cute, is to have the software agent poll the server to see when there is a "scan document and upload" order for that specific machine, and perform that operation when one is present.
In summary, having a local software agent to interact with the local hardware doesn't make your webapp less "web", as long as you use web standards -- like XML, SOAP and others -- to perform that communication.
You can put a Java applet in your website. This can access the scanner and send the data via REST to your web server.
My bank's website has a security feature that lets me register the machines that are allowed to make banking transactions. If someone steals my password, he won't be able to transfer my money from his computer. Only my personal computers are allowed to make transactions from my account. So...
What are the approaches to restrict the access to a group of machines in a web system?
In other words, how to identify the computer who made the http request in the web server?
Why not use a client certificate stored in the certificate store of an authorized host, or in a cryptographic token such as a smartcard that can be plugged into any desired computer?
Update: You should take into account that uniquely identifying a computer means obtaining something that is at a relatively low level, inaccessible to code embedded in an HTML page (JavaScript, unsigned applets or ActiveX), unless you install something on the desired computer (or execute something signed, such as an applet or ActiveX control).
One thing that is unique per computer is the MAC address of the Ethernet card, which is almost ubiquitous on every rather modern (and not so modern) computer. However, that may not be secure enough, since many cards allow changing their MAC address.
The Pentium III used to have a unique serial number inside the CPU that would have fit your use perfectly. The downside is that no newer CPUs come with such a thing, due to privacy concerns from most users.
You could also combine many elements of the computer, such as the CPU ID (model, speed, etc.), motherboard model, hard disk space, memory installed, and so on. I think Windows XP used to gather this type of information to feed a hash that uniquely identifies a computer for activation purposes.
Update 2: Hard disks also come with serial numbers that can be retrieved by software. Here is an example of how to get one for activation purposes (your case). However, it will still work if somebody takes the HD to another computer. Nonetheless, you can combine it with more unique data from the computer (such as the MAC address, as I said before). I would also add a unique key generated for the user and kept in a database of your own (that could be retrieved online from a server), along with the rest, to feed a hash function that identifies the system.
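To illustrate combining several of those values, here is a sketch using WMI (System.Management) that hashes a few hardware identifiers together with a per-user key; the specific WMI properties queried are my choice, not a prescription:

    using System;
    using System.Management;                // reference System.Management.dll
    using System.Security.Cryptography;
    using System.Text;

    class MachineFingerprint
    {
        static string Query(string wmiClass, string property)
        {
            // Returns the first value of the given property, or "" if unavailable.
            using (var searcher = new ManagementObjectSearcher($"SELECT {property} FROM {wmiClass}"))
            {
                foreach (ManagementObject mo in searcher.Get())
                    return Convert.ToString(mo[property]);
            }
            return "";
        }

        static void Main()
        {
            // Combine several identifiers: CPU, motherboard, disk serial, MAC address.
            string raw = string.Join("|",
                Query("Win32_Processor", "ProcessorId"),
                Query("Win32_BaseBoard", "SerialNumber"),
                Query("Win32_PhysicalMedia", "SerialNumber"),
                Query("Win32_NetworkAdapterConfiguration", "MACAddress"),
                "per-user-key-from-your-database");   // placeholder for the server-issued key

            using (var sha = SHA256.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
                Console.WriteLine(BitConverter.ToString(hash).Replace("-", ""));
            }
        }
    }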
Did you actually install something?
Over and above what Mark Brittingham mentions about IP addresses, I suppose some kind of hash code that is known only to your bank's computer and your computer(s) would work, provided you installed something. However, if you don't have a very strong password to begin with, what would stop someone from "registering" their computer to steal money from you?
I would guess your bank was doing it by using a trusted applet - my bank used to have a similar approach (honestly I thought it was a bit of a hassle - now they're using a calculator-like code generator instead). The trusted applet has access to your file system, so it can write some sort of identifier to a file on your system and retrieve this later.
A tutorial on using trusted applets.
I'm thinking about using Gears to store locally a hash-something to flag that the computer is registered.
If you are looking for the IP address of the computer that makes an account-creation request, you can easily pull that from the Request. In ASP.NET, you'd use:
string IPAddress = Request.UserHostAddress;
You could then store that with the account record and only accept logins for that account from that IP address. The problem, of course, is that this will not work for a public site at all. Most people come through an ISP that assigns IP addresses dynamically. Even with an always-on internet connection, the ISP will occasionally drop and re-open the connection, resulting in a change of IP address.
Anyway, is this what you are looking for?
Update: if you are looking to register a specific computer, have you considered using cookies? The drawback, of course, is that someone may clear their cookies and thus "unregister" their computer. The problem is, the web only has so much access to your computer (not much), so there is no foolproof way to "register" a computer. Even if you install an ActiveX control, they could uninstall or delete it (although this is more persistent than a cookie). In the end, you'll always have to provide the end-user with some method for re-registering. And, if you do that, then you might as well have them log in anyway.
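If you do go the cookie route, the registration itself is tiny; a sketch in ASP.NET (System.Web), where the cookie name and lifetime are arbitrary and the generated GUID would be stored against the account server-side:

    using System;
    using System.Web;

    public static class MachineRegistration
    {
        const string CookieName = "MachineId"; // arbitrary name

        // Call this after the user has authenticated and chosen to register this computer.
        public static string Register(HttpResponse response)
        {
            string machineId = Guid.NewGuid().ToString("N");
            var cookie = new HttpCookie(CookieName, machineId)
            {
                Expires = DateTime.UtcNow.AddYears(1), // persists until cleared by the user
                HttpOnly = true,
                Secure = true
            };
            response.Cookies.Add(cookie);
            // Store machineId against the account in your database so logins can be checked against it.
            return machineId;
        }

        // On later logins, compare the presented cookie with the IDs stored for the account.
        public static string GetMachineId(HttpRequest request)
        {
            HttpCookie cookie = request.Cookies[CookieName];
            return cookie != null ? cookie.Value : null;
        }
    }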