Cannot access file in IPFS

I have uploaded a file from the IPFS desktop client on my Ubuntu 18.04 machine, but I cannot access the file with the link I obtained from the Share link option.
Here is my shareable link.
I cannot access the other file either: link.
Why is this happening?

To keep a file available in the peer-to-peer IPFS network, the file must be pinned. Remember to pin the files you want to remain available for longer periods, even when your own computer is not online to serve them.
I'm new to IPFS and found your Stack Overflow question while trying to figure out how to solve the same problem. I had uploaded a file to IPFS and it had been reachable over the share link for a period of time, until today, when it wasn't.
After some research I got it working again by pinning the file. This Medium article was a good read. In the screenshot you provided I can see that only one of the files is pinned, and it is not the one you shared the link to. Most likely the problem will be fixed once you pin it. This can be done through the desktop client or with terminal commands (see the Medium article to learn more about those).
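For reference, pinning from the terminal looks roughly like this, assuming the IPFS command-line tool is installed and the daemon is running; <CID> is just a placeholder for the content hash in your share link:

    # Pin the file's CID (the hash in your share link) so your node keeps it available
    ipfs pin add <CID>

    # Verify that it is now pinned
    ipfs pin ls --type=recursive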
Best of luck!

Chmod a folder of imported images

I run a blog on which every external image posted by me or my users on a topic is directly imported to my web server (into the images folder).
This "images" folder is chmodded to 755; however, all the files in it (.jpg/.png/.jpeg/.gif) are automatically chmodded to 644.
In this case, if a user posts an "infected" image that contains a PHP script or other malicious code, is it blocked by chmod 644, or is there still a chance that the code gets executed when the attacker opens the URL mysite.com/images/infectedfile.png?
Thanks for your help
Interesting question. There are two problems that I can see with a malicious file ending up in your images folder:
What does the malicious file do to your server?
What does the malicious file do to your users that download it?
I don't believe that your solution will be complete by any means. Consider what happens when a user visits a post with a malicious image on it: an unlucky user could be infected by the malicious code, while a lucky user will have an anti-malware product that detects it and blocks the page or image from being loaded.
The lucky user is unlucky for you, because your reputation with that user is damaged and they might not come back. Worse still, if the user reports your blog as serving malicious files, e.g. via Google Safe Browsing, you can find yourself on a block list, which will mean virtually zero traffic to your site.
I would strongly recommend a layered approach to this problem. First, look for a technology that lets you scan files as they are uploaded, or at least confirm they are not known to be malicious. Secondly, look at a solution that scans the files you are serving from your web server.
Understandably, this becomes harder with modern hosting, where you don't necessarily own the operating system of your web server and therefore can't deploy a traditional anti-malware product. There are options for looking up files via an API as they pass into your environment; VirusTotal and Sophos Intelix are a couple of examples, but I am sure there are more.
You should configure your web server so that the images folder only serves files ending in exactly the file types you allow (e.g. .jpg, .png, .gif), and so that the PHP interpreter is never run on them. You should also ensure that the correct MIME type is always used for those file types: image/jpeg for .jpg, image/png for .png, and image/gif for .gif.
If those two things are configured, then even if someone uploads, say, a .jpg file that actually contains something malicious, the browser will try to load it as a JPEG image and fail because it isn't valid. Occasionally browsers have bugs where a specially crafted, broken image file can cause the browser to do something it shouldn't.
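As a rough illustration of that second answer, a configuration along these lines could work, assuming an Apache server with mod_php where .htaccess overrides are allowed in the images folder (the exact directives depend on your setup, so treat this as a sketch rather than a drop-in file):

    # images/.htaccess (hypothetical example)
    # Never run the PHP engine for files served from this directory
    php_flag engine off

    # Deny everything by default, then allow only the expected image extensions
    Require all denied
    <FilesMatch "\.(?i:jpe?g|png|gif)$">
        Require all granted
    </FilesMatch>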

How to solve a caching problem in modern browsers?

We are developing a VueJS-based application and we have a huge caching problem.
Team members are constantly updating the site, but we keep getting feedback about problems that have already been fixed, such as typos and misplaced elements.
I personally tried to inspect this situation and found that Chrome reads the files from the disk cache or memory cache until the page is refreshed. Even so, Chrome sometimes still loads the old page when we re-enter the site, even after a hard refresh (Ctrl + Shift + R).
I'm sorry for my bad English, but I tried my best to explain what I encountered. I also found a topic about this problem in which the OP explains exactly what I was encountering; you can check that out as well.
How to clear the cache of a service worker?
I created a website on IIS (local machine, Windows 10), published the project, and tried reaching it at the local IP address (127.0.0.1:8093). In the Network tab I can see the .js and .css files being downloaded. Then I restarted the browser and tried again; this time the files were served from the disk cache. I tried a couple of times, and sometimes the files are served from the cache and sometimes downloaded.
I tried to add a service worker but came up empty-handed. I also created a base project to test some Vue.js features and added the same service worker code to it; it cached again.
Our server is Windows Server 2012 with IIS 8.
If possible we want a no-caching approach, or at least to manage what is and is not cached. Any help would be appreciated.
You can check out the base project:
vue-base project
What I tried
As I said above, I tried to add service workers following this GitHub commit:
https://github.com/vuejs-templates/pwa/pull/21/files
I also tried deleting the cached data with caches.delete(cacheName), but that did not seem to work.
I don't know whether service workers are related to this problem, but they did not solve it; maybe I did not add the code properly. If you can help I would be very appreciative.
Thank you for your help.
Edit1: Screen GIF
I don't know what you have been using to bundle your code and assets, but with webpack it is possible to emit files with a hash in their names, which means that every time the browser finds a new file reference in your HTML it will download that file again.
For example, yesterday you deployed code which contained main.34534534534.js.
Today you deploy again, but the file is now main.94565342.js. The browser will automatically invalidate its cache.
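As a minimal sketch of that idea (assuming webpack 4+; the exact place to put this depends on your Vue CLI/webpack setup), content hashes can be added to the output file names like this:

    // webpack.config.js - a minimal sketch, not your actual build config
    module.exports = {
      output: {
        // [contenthash] changes whenever the file's content changes,
        // so browsers fetch the new file instead of reusing a cached one
        filename: '[name].[contenthash].js',
        chunkFilename: '[name].[contenthash].js',
      },
    };

Note that index.html itself must still be served with no-cache (or very short cache) headers, since it is the file that references the hashed names.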

How to change Sublime Text Packages directory location?

I would like to change the Packages directory location or add a new one (to be on a network drive).
I can't use a symlink.
I had a look at the plugins, but couldn't find one that does this.
I am on Windows Vista, and soon Windows 8.
Thank you for your help.
Christophe
If a moderator could please turn this post into a comment: it is only a potential workaround for the issue rather than a complete solution.
On OS X, I automatically back up the entire SublimeText2 folder and its subfolders within the Application Support directory to Dropbox using a symlink. Since Dropbox works across platforms, perhaps this will give the original poster some ideas for letting multiple computers use the same Packages data. I am successfully using this method to keep all three of my computers synchronized automatically: if anything changes in the Packages directory or its subdirectories on one computer, all the computers are synchronized automatically.
Of course, back up your data on each computer before trying out this method.
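For illustration only, the Dropbox-plus-symlink arrangement described above could be set up on OS X roughly like this (the paths assume a default Sublime Text 2 installation; make a copy of the folder first):

    # Move the Sublime Text 2 data folder into Dropbox, then link it back
    mv ~/Library/Application\ Support/Sublime\ Text\ 2 ~/Dropbox/SublimeText2
    ln -s ~/Dropbox/SublimeText2 ~/Library/Application\ Support/Sublime\ Text\ 2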

How to partially read a CSV file with Super CSV

I have a CSV file with 24 columns, of which I only want to read 3. I see that Super CSV is a very powerful library, but I can't figure out how to read a CSV partially. The link they have on partial reading is broken.
Please help me out with a working example.
Update: SourceForge is back online! The Super CSV site should work now :)
That's the correct link, but SourceForge project websites are down right now (according to the SourceForge blog):
Starting at 12:59 UTC today, we experienced a site outage, causing general connectivity issues sitewide. At 15:12 UTC, site connectivity was restored and most services, including downloads, are now back online. Some services are, however, still offline while we continue to diagnose and determine the root cause of this issue. The services still offline are:
Project web (i.e., projectname.sourceforge.net pages) and associated shell and database services. This also includes access through sftp, scp, and rsync via ssh.
So you have a few options:
View the source code of the reading example on SourceForge (this part of SF is still working); there's a link to download the file at the top left.
Check out the project source from Subversion (you can view/run the reading example listed above to see how it works, or you can even run mvn site:site to generate the project website locally).
View the cached page from Google.
I hope you enjoy using Super CSV. If you have any other questions, feel free to post them here on SO or on the project help forum on SourceForge.

Huge file upload

I have a web service that accepts really huge files. Usually in the range of 10 - 15 GB (not MB).
However, uploading via a browser is only possible using Chrome on Linux. All three major browsers have different flaws when trying to upload such a file:
Internet Explorer stops after exactly 4GB.
Firefox does not start at all.
Chrome (on Windows) transfers the whole file but fails to send the closing boundary (it sends 0xff instead).
Now we are searching for a way to get uploads to work, preferably using HTML/JS only, but I see no way to make that happen. The second option would be Flash, but FileReference seems to break for files > 4 GB. The last resort would be Java, but that is not what we are looking for in the browser client.
Note that this is about the client. I know that the server-side code works, as I can upload a 12 GB file using a standard HTML upload with Chrome on Linux. It is the only browser/OS combination that works so far, but because of that I am sure the server code is fine.
Does anyone know any way to get huge file uploads to work?
Regards,
Steffen
There is a fairly young JS/HTML5 API which might cover your use case:
https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications
I can't speak to its suitability though.
If you're using IIS, the default max file upload is 4GB. You need to change this in your script or your server settings.
See: Increasing Max Upload File Size on IIS7/Win7 Pro
Normally you would break such files up and upload them in chunks as a streaming upload: take a limited amount of data from the file, upload that part to the server, and have the server append the data to the file; repeat until the complete file has been uploaded. You can read a file in parts using FileStream (update: Adobe AIR only) or, with JavaScript, using the experimental Blob/FileReader APIs. I guess this could be a workaround for the problem.
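As a rough sketch of the JavaScript side of that idea (the /upload-chunk endpoint, its query parameters, and the server-side appending are all hypothetical):

    // Minimal chunked-upload sketch using File.slice and XMLHttpRequest.
    // The server is assumed to append each chunk at the given offset.
    function uploadInChunks(file, chunkSize, onDone) {
      var offset = 0;

      function sendNextChunk() {
        if (offset >= file.size) {
          onDone();
          return;
        }
        var chunk = file.slice(offset, offset + chunkSize); // a Blob of at most chunkSize bytes
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/upload-chunk?name=' + encodeURIComponent(file.name) + '&offset=' + offset);
        xhr.onload = function () {
          offset += chunk.size;
          sendNextChunk(); // only send the next piece once this one has succeeded
        };
        xhr.send(chunk);
      }

      sendNextChunk();
    }

    // Usage: wire it to a file input and upload in 10 MB chunks
    document.getElementById('file').addEventListener('change', function (e) {
      uploadInChunks(e.target.files[0], 10 * 1024 * 1024, function () {
        console.log('upload complete');
      });
    });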
I think this solution could help:
AS3 fileStream appears to read the file into memory
Let me know if this works out; it's an interesting problem.