I have some questions regarding how IPFS works:
1. If I am the only one who has the Merkle root hash of a file uploaded to IPFS, is it infeasible for other peers to download/find this same file?
2. Related to point 1: can I see all the files that have been uploaded to IPFS from every peer in the network? If yes, how?
3. Once a file is uploaded to IPFS, it is split into chunks and these are given to peers in the network. In which scenario is it possible to 'lose' the file? Can the fact that several peers in the network go 'offline' forever be a security problem for IPFS?
4. Is there a way to allow only specific peers to have access to a specific file stored on IPFS?
Ad 1: As long as there is at least one peer with the content, other peers should eventually (it may take time) be able to discover its address via the DHT and connect to it (as long as it has a public, dialable address). A primer on networking challenges can be found in Core Course - Solving distributed networking problems with libp2p
Ad 2: You can't see everything in real time (you'd have to be connected to every peer in existence), but you can crawl the public DHT. See how it's done in https://github.com/raulk/dht-hawk or https://github.com/ipfs-search/ipfs-search/
Ad 3: Chunks are never "pushed" to other peers; peers need to request them. Content is available on the network as long as peers provide it. You can "lose" a file if nobody wants to host it. More on this in Core Course - The Lifecycle of Data in DWeb
Ad 4: There is no access control at the file level. You can set up a private network in which only peers that know a secret key can connect to each other and exchange data.
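For illustration, a minimal sketch of generating such a secret key in Python, assuming go-ipfs's private-network swarm.key format (the file each peer places in its IPFS repo, e.g. ~/.ipfs):

import os

# generate a 256-bit pre-shared key in the go-ipfs swarm.key format
key = os.urandom(32).hex()
with open("swarm.key", "w") as f:
    f.write("/key/swarm/psk/1.0.0/\n/base16/\n" + key + "\n")

Every peer in the private network needs the same swarm.key; peers without it cannot join.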
For future reference, a much better place for asking these types of generic questions is the IPFS Community Forum at https://discuss.ipfs.io/c/help
I have two or three sets of Azure credentials for Work, Work Admin, and Personal. This morning, I clicked the wrong login credential during an interactive login while doing some local development. My local dev app now has an identity of me@company.com, when I need my identity to actually be me@admin.com. Because I clicked the wrong identity, my application immediately started getting obvious authorization errors.
My implementation is pretty naive right now: I'm relying on the Python Azure SDK to realize when it needs to log in, and to perform that login without any explicit code on my end. This has worked great so far, letting me do interactive login locally while using the Azure-provided creds when deployed.
How can I get my local dev application to forget the identity that it has and prompt me to perform a new interactive login?
Things I've tried:
Turning the app off and back on again. The credentials are cached somewhere, I gather, and rebooting the app is ineffective.
Scouring Azure docs. I may not know the magic word, and as a consequence many search results have to do with authentication for users logging into my app, which isn't relevant.
az logout did not appear to change whatever cache my app is using for its credential token.
Switching python virtual environments. I thought perhaps the credential would be stored in a place specific to this instance of the azure-sdk library, but no dice.
Scouring the azure.identity Python package. I gather this package may be involved, but I don't see how I can find and destroy the credential cache, or any other way to log out.
Deleting ~/.azure. The python code continued to use the same credential it had prior. ~/.azure must be for the az cli, not the SDK.
Found it! The AzureML SDK appears to be storing auth credentials in ~/.azureml/auth/.
Deleting the ~/.azureml directory (which didn't seem to have anything else in it anyway) did the trick.
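For anyone hitting the same thing, a minimal sketch of that workaround in Python (the path is the one found above; adjust if your SDK version stores its cache elsewhere):

import shutil
from pathlib import Path

# wipe the AzureML auth cache so the SDK prompts for a fresh interactive login
cache = Path.home() / ".azureml"
if cache.exists():
    shutil.rmtree(cache)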
The Python garbage collector provides access to unreachable objects that the collector found but cannot free. Since the collector supplements the reference counting already used in Python, you can disable the collector if you are sure your program does not create reference cycles. Refer to the gc module documentation.
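A small illustration of both points (the cycle below is unreachable to reference counting alone, but the cycle collector frees it):

import gc

gc.disable()         # reference counting still frees acyclic objects
a = []
a.append(a)          # create a reference cycle
del a                # the refcount never drops to zero, so this leaks for now
gc.enable()
print(gc.collect())  # the cycle detector finds and frees it; prints the count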
You can use weak references: a weak reference to an object is not enough to keep the object alive. When the only remaining references to a referent are weak references, garbage collection is free to destroy the referent and reuse its memory for something else. However, until the object is destroyed, the weak reference may return the object even if there are no strong references to it.
For using weak references, refer to reference 1 & reference 2.
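A minimal example of that behaviour:

import weakref

class Node:
    pass

n = Node()
r = weakref.ref(n)   # a weak reference does not keep n alive
print(r() is n)      # True: the referent is still strongly referenced
del n                # drop the only strong reference
print(r())           # None: the referent has been collected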
Can someone help me configure JXTA? I'm not understanding what type of URI to place as the rendezvous.
Noor, you are using an old version of JXTA. The window you see has been completely removed from the code base in 2.7, and it is not used in 2.6 either. JXTA is not configured with this window anymore.
About your question: there are two types of URIs, seed and seeding. Seeds describe the 'locations' of peer devices acting as RendezVous peers or Relays (i.e., 'super' peers). This information is used directly by peers to connect to RDV or relay peers.
Seedings are locations where peers can load information about the locations of seed peers. Typically, they point to an online XML advertisement document. First, these documents are loaded by peers; then the extracted content is used by peers to connect to RDV or relay peers.
For more details about JXTA configuration, you can also read the Practical JXTA II document, available online at Scribd.
My bank's website has a security feature that lets me register the machines that are allowed to make banking transactions. If someone steals my password, he won't be able to transfer my money from his computer: only my personal computers are allowed to make transactions from my account. So...
What are the approaches to restrict the access to a group of machines in a web system?
In other words, how can the web server identify the computer that made the HTTP request?
Why not use a client certificate stored in the certificate store of an authorized host, or in a cryptographic token such as a smart card that can be plugged into any desired computer?
Update: You should take into account that uniquely identifying a computer means obtaining something at a relatively low level, inaccessible to code embedded in an HTML page (JavaScript; not a signed applet or ActiveX control), unless you install something on the desired computer (or execute something signed, such as an applet or ActiveX control).
One thing that is unique per computer is the MAC address of the Ethernet card, which is almost ubiquitous on every rather modern (and not so modern) computer. However, that may not be secure enough, since many cards allow changing their MAC address.
The Pentium III used to have a unique serial number inside the CPU, which would fit your use case perfectly. The downside is that no newer CPUs come with such a thing, due to privacy concerns from most users.
You could also combine many elements of the computer, such as the CPU id (model, speed, etc.), motherboard model, hard disk space, and memory installed. I think Windows XP used to gather this type of information to feed a hash that uniquely identified a computer for activation purposes.
Update 2: Hard disks also come with serial numbers that can be retrieved by software. Here is an example of how to get it for activation purposes (your case). However, it will still work if somebody takes the HD to another computer, so you should combine it with more unique data from the computer (such as the MAC address, as I said before). I would also add a unique key generated per user and kept in a database of your own (one that could be retrieved online from a server), and feed all of this to a hash function that identifies the system.
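A rough sketch of that combination in Python; every identifier here is spoofable, and the per-user key is a hypothetical value from your own database:

import hashlib
import platform
import uuid

parts = [
    str(uuid.getnode()),    # MAC address of one NIC (can be changed)
    platform.processor(),   # CPU identification string
    platform.machine(),     # architecture, e.g. x86_64
    "per-user-secret-key",  # hypothetical key fetched from your server
]
fingerprint = hashlib.sha256("|".join(parts).encode()).hexdigest()
print(fingerprint)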
Did you actually install something?
Over and above what Mark Brittingham mentions about IP addresses, I suppose some kind of hash code that is known only to your bank's computer and your computer(s) would work, provided you installed something. However, if you don't have a very strong password to begin with, what would stop someone from "registering" their computer to steal money from you?
I would guess your bank was doing it by using a trusted applet - my bank used to have a similar approach (honestly I thought it was a bit of a hassle - now they're using a calculator-like code generator instead). The trusted applet has access to your file system, so it can write some sort of identifier to a file on your system and retrieve this later.
A tutorial on using trusted applets.
I'm thinking about using Gears to locally store a hash of something, to flag that the computer is registered.
If you are looking for the IP address of the computer that makes an account-creation request, you can easily pull that from the Request. In ASP.NET, you'd use:
string IPAddress = Request.UserHostAddress;
You could then store that with the account record and only accept logins for that account from that IP address. The problem, of course, is that this will not work for a public site at all. Most people come through an ISP that assigns IP addresses dynamically. Even with an always-on internet connection, the ISP will occasionally drop and re-open the connection, resulting in a change of IP address.
Anyway, is this what you are looking for?
Update: if you are looking to register a specific computer, have you considered using cookies? The drawback, of course, is that someone may clear their cookies and thus "unregister" their computer. The problem is, the web only has so much access to your computer (not much), so there is no fool-proof way to "register" a computer. Even if you install an ActiveX control, they could uninstall or delete it (although this is more persistent than a cookie). In the end, you'll always have to provide the end user with some method for re-registering. And if you do that, then you might as well have them log in anyway.
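A minimal sketch of the cookie-based registration idea, assuming Flask; the route names and token store are illustrative:

import secrets
from flask import Flask, request, make_response

app = Flask(__name__)
registered_tokens = set()  # in practice, store tokens per account in your database

@app.route("/register")
def register():
    token = secrets.token_urlsafe(32)
    registered_tokens.add(token)
    resp = make_response("This computer is now registered.")
    resp.set_cookie("machine_token", token,
                    max_age=60 * 60 * 24 * 365, secure=True, httponly=True)
    return resp

@app.route("/login")
def login():
    # reject logins from machines that don't present a registered token
    if request.cookies.get("machine_token") not in registered_tokens:
        return "Unregistered computer", 403
    return "Proceed with normal login"

As said above, clearing cookies silently "unregisters" the machine, so a re-registration path is still required.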
Problem:
I need to design a networking application that can handle network failures and switch to another network in case of failure.
Suppose I am using three Ethernet connections and also one wireless connection. At any particular moment, only one connection is being used.
How should I design my system so that it can switch to another network in case of failure?
I know this is a very broad question, but any pointers will help!
I'd typically make sure that there's routing on the network and run one (or more) routing protocol instances on the host. That way network failure is (mostly) transparent to the application, as the host OS takes care of sending packets the right way.
On the open-source side, I have had good experiences with zebra and quagga, at least on Linux machines.
Create a domain model for this, describing the network elements, the kind of failures you want to be able to detect and handle, and demonstrate that it works. Then plug in the network code.
Have one class polling for the connection. If the poll timeout fires, switch the Ethernet settings. For wireless, set the Wi-Fi settings to autoconnect and then just enable/disable the Wi-Fi card.
(But I don't know how you switch the Ethernet connection.)
First thing I would do is look for APIs that will give me network disconnection events.
I'd also find a way to check the state of the network connections.
These would vary depending on the OS and the language used, so you might want to have this abstracted in your application.
Example:
RegisterDisconnectionEvent(DisconnectionHandler);
function DisconnectionHandler()
{
    FindActiveNetworkConnection();
    // do something else...
}
A primitive way to do it would be to look out for network disconnection events. Your sequence would be:
Register/poll for network connection status changes. Maintain a list of all active network connections.
Use the first available network connection. (Alternatively, you could sort the list by interface bandwidth and use the one with the highest bandwidth.)
When you detect a down connection, switch to the next active one (a sketch of this loop follows below).
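A primitive sketch of that loop in Python, assuming the third-party psutil package; the interface names and priority order are hypothetical:

import time
import psutil

PREFERRED = ["eth0", "eth1", "eth2", "wlan0"]  # hypothetical priority order

def active_interfaces():
    # names of the interfaces the OS currently reports as up
    return {name for name, stats in psutil.net_if_stats().items() if stats.isup}

current = None
while True:
    up = active_interfaces()
    if current not in up:
        candidates = [i for i in PREFERRED if i in up]
        current = candidates[0] if candidates else None
        print("switching to", current)  # re-point routes/sockets here
    time.sleep(5)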
However, if which network connection you use has implications for the functionality of your application, you are much better off having either a routing protocol do the job for you, or a tracking component within your application. This tracking component would track network paths (through methods like ping, traceroute, etc.) across all your available interfaces to see which one can reach the ultimate destination, and use the appropriate network interface.
Also, you could monitor your network interfaces not just for status changes, but also for input/output errors, and change your selection accordingly. This would help you use the most efficient network at any given point in time, but it needs to be balanced against the churn caused by switching network connections.
If you control all of the involved hosts, Multipath TCP will probe all of your connections and automatically choose the one that works; if multiple connections are working, it will load balance across them.
If you don't control the endpoints, there's no choice but to do the probing in the application. Mosh is an example of an application that does this quite elegantly.
You didn't mention what your application does; perhaps it would be possible to redesign your protocol so that it uses all available connections simultaneously, the way BitTorrent does, and therefore doesn't care about some links being down at any given time?
What is the most reliable way to prevent users from a geographic location to access a web available application?
I understand that IP addresses are related to geo-positioning, and I also understand that the most naive way is to take the IP address from the HTTP request and go from there.
It's obvious that naive methods like the one described are extremely easy to bypass, especially using proxies or VPNs.
So the question is: is there a 100% reliable way of determining a web user's geographic location? If not, what are the available options, and what are the pros and cons of each?
The short answer is no. There is no way to 100% lock out people from a specific geographic location, because you can't determine the location of a user that reliably using an IP address. Even if you could, it can be faked through redirects.
There are ways to make it more difficult for people in a region to access the site, but the more restrictive you get with those approaches the more legitimate users you are likely to lock out. For example, turning off the server would give you 100% assurance that no one from China could hit it, but it would also give you 100% assurance that no one in the US could either.
Nothing in TCP/IP includes location data (other than what you can infer from routing tables or look up in a database), and nothing indicates whether a machine is acting "on behalf of" someone in another location.
So, as you say, proxies and VPNs, SSH port forwarding, Tor, etc., can completely prevent your web app from knowing the physical location of the human being who's using your site. All you can look up is the IP address of the last hop, which is the TCP/IP connection and HTTP request you actually see.
The above techniques won't work if anyone is trying to hide their location from you by redirecting through relays in other countries.
I found this script to be an easy way to implement this:
https://www.blocked.com/
Country blocking is included in the free version, as is blocking of open proxy servers, anonymity networks, etc.
There is a database somewhere on the tubes named IP 2 Country which can tell you where an IP is from.
It is of course not perfect, but it can give you the country the IP comes from.
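A sketch of such a lookup in Python, assuming MaxMind's geoip2 package and a downloaded GeoLite2-Country.mmdb database file:

import geoip2.database

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")
response = reader.country("203.0.113.7")  # example address
print(response.country.iso_code)          # e.g. "US"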
There is also a method called SSN which is related to IP addresses. I don't know how it works, however, and it seems to be rather complicated. It is commonly used in ads to send you localised spam. For example, if you live in Montreal, Canada, the ad will display "Find singles from Montreal!". The ISP behind the person does have to support this service.
First, figure out what IP ranges are assigned to the region; then you can check the user's IP address on every request. If it falls in a range belonging to the region you want to block, send them to disney.com.
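A minimal sketch of that per-request check using only the Python standard library; the blocked ranges below are illustrative, not real regional allocations:

import ipaddress

BLOCKED_RANGES = [ipaddress.ip_network(n)
                  for n in ("198.51.100.0/24", "203.0.113.0/24")]

def is_blocked(ip):
    # True if the address falls inside any blocked range
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)

print(is_blocked("203.0.113.42"))  # True -> redirect to disney.com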
See if this helps you: IP Address Info
No, there's no fool-proof way of doing this.
There's plenty of related work going on at the IETF in the GeoPriv working group, where protocols are being designed (e.g. HELD) to allow entities to ask the network for their own location, and also to allow other authorised entities to request that information.
However the VPN issue still causes problems, to the extent that clients with VPN capability need to request their location information before the VPN is established.