caffe could not open or find file - deep-learning

I'm new to Caffe, and after successfully running an example I'm trying to use my own data. However, whether I try to write my data into the lmdb data format or use the solver directly, in both cases I get the error:
E0201 14:26:00.450629 13235 io.cpp:80] Could not open or find file ~/Documents/ChessgameCNN/input/train/731_1.bmp 731
The path is right, but it's weird that the label 731 is part of this error message. That implies that it's reading it as part of the path instead of as a label. The text file looks like this:
~/Documents/ChessgameCNN/input/train/731_1.bmp 731
Is it because the labels are too high? Or maybe because the labels don't start at 0? I've searched for this error, and all I found were examples with relatively few labels, about 1-5, but I have about 4096 classes, and I don't always have examples of every class in the training data. Maybe this is a problem too (certainly for learning, at least, but I didn't expect it to produce an actual error message). Usually, the label does not seem to be part of this error message.
For the creation of the lmdb file, I use the create_imagenet.sh from the caffe examples. For solving, I use:
~/caffe/build/tools/caffe train --solver ~/Documents/ChessgameCNN/caffe_models/caffe_model_1/solver_1.prototxt 2>&1 | tee ~/Documents/ChessgameCNN/caffe_models/caffe_model_1/model_1_train.log
I tried different image data types, too: PNG, JPEG and BMP. So this isn't the culprit, either.
If it is really because of my choice of labels, what would be a viable workaround for this problem?
Thanks a lot for your help!

I had the same issue. Check that the lines in your text file don't have spaces at the end.

I was facing a similar problem with convert_imageset. I solved it just by removing the trailing spaces in the text file that contains the labels.
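For what it's worth, here is a minimal Python sketch of that cleanup (the file names are placeholders for your own listing file):
# Minimal sketch (not Caffe's own tooling): strip trailing whitespace from each
# line of an image/label list file. "train.txt" is just an example path.
in_path = "train.txt"
out_path = "train_clean.txt"

with open(in_path) as src, open(out_path, "w") as dst:
    for line in src:
        cleaned = line.rstrip()          # drop trailing spaces/tabs/newline
        if cleaned:                      # skip empty lines
            dst.write(cleaned + "\n")    # re-add a single newline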

Unreadable Notebook NotJSONError('Notebook does not appear to be JSON: u\'{\\n "cells": [\\n {\\n "cell_type": "...',)

Getting this very strange error when I am trying to load my IPython notebook. I've never had it before, and to my recollection I haven't done anything silly with IPython:
Unreadable Notebook: /path/to/notebooks/results.ipynb NotJSONError('Notebook does not appear to be JSON: u\'{\\n "cells": [\\n {\\n "cell_type": "...',)
which is followed by
400 GET /api/contents/results.ipynb?type=notebook&_=1440010858974 (127.0.0.1) 36.17ms referer=http://localhost:8888/notebooks/results.ipynb
Save yourself a headache. Open your .ipynb in any online JSON validator and it will tell you which lines have issues. I used this one.
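If you'd rather not paste the notebook into a website, a few lines of Python report the same information; the path below is just an example:
# Quick local alternative to an online validator: json.JSONDecodeError reports
# the line and column of the first problem in the notebook JSON.
import json

path = "results.ipynb"
try:
    with open(path, encoding="utf-8") as f:
        json.load(f)
    print("Notebook JSON is valid")
except json.JSONDecodeError as e:
    print(f"Invalid JSON: {e.msg} at line {e.lineno}, column {e.colno}")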
In my case, I am using GitHub to save and share my IPython files with my teammate. When there was a conflict in the code, I had to delete the lines that mark the conflicting changes, such as:
>>>>>>>>head
=============
and it worked for me.
This happened to me as well. I opened my data.ipynb file using Notepad and found out it was blank.
I managed to recover my file by going into the hidden .ipynb_checkpoints folder and copying data-checkpoint.ipynb out into my working directory.
In my macOS terminal:
cd .ipynb_checkpoints
cp data-checkpoint.ipynb ..
Thankfully the code was preserved. Hope this helps!
I just had the same issue after upgrading from IPython 0.13 (ish) to Jupyter 4.
The problem in my case was a few rogue trailing commas in the JSON, for example the comma following "outputs" in:
...
"language": "python",
"metadata": {},
"outputs": [],
},
After removing the commas, Jupyter/IPython could again read the notebook (and upgraded it to version 4). I hope this helps.
The easiest way to recover corrupted Jupyter notebook files, whether they still contain text or not (size = 0 KB), is to go to the project folder and display the hidden files. Once the hidden files are displayed, you will see a folder named '.ipynb_checkpoints'. Simply open this folder and take the file you want!
Visual Studio Code procedure
This is the procedure that usually keeps me from groping in the dark.
I installed a JSON parser/validator extension like this one.
Open the file and save a copy of it as a .json file.
Open the JSON copy and look for the errors.
Save it back with the .ipynb extension.
Usually, I manage to fix the errors quickly.
Jupyter autosaves in a specific way, so this usually means you accidentally closed the notebook before it was properly saved.
You need to look for three things:
Search for <<<<<<< and delete those lines.
Search for ====== and replace those lines with ,.
Search for >>>>>>> and delete those lines.
It will work fine after this.
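If you have to do this often, here is a minimal Python sketch that automates those three steps (keep a backup first, since it edits the notebook blindly; "notebook.ipynb" is a placeholder for your file):
# Drop "<<<<<<<" and ">>>>>>>" lines and replace "=======" lines with a comma,
# following the steps above.
path = "notebook.ipynb"

with open(path, encoding="utf-8") as f:
    lines = f.readlines()

fixed = []
for line in lines:
    stripped = line.strip()
    if stripped.startswith("<<<<<<<") or stripped.startswith(">>>>>>>"):
        continue                      # delete conflict-marker lines
    if stripped.startswith("======="):
        fixed.append(",\n")           # replace the separator with a comma
    else:
        fixed.append(line)

with open(path, "w", encoding="utf-8") as f:
    f.writelines(fixed)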
You can also reformat the .ipynb file by hand so it is readable in Jupyter Notebook again. Open one of your other .ipynb files that still works in Notepad, compare it (especially the end of the file) with the broken one, and edit the broken file to match that structure.
I had this issue after accidentally saving the file as .txt when downloading it from GitHub, and solved it by removing the .txt extension (leaving .ipynb instead of .ipynb.txt).
The best solution for me: I saved my notebook in HTML format, then opened it in Notepad++, deleted the long repeated lines of output that were causing my notebook to grow to 45 MB, saved the file back in .ipynb format, and was able to open it with no JSON error.
Hope that works for others as well!
I got this error after conflicts while pushing my code to GitHub. The code on the repo was old and my changes were stashed, and the notebook wouldn't open in either Jupyter or the GitHub repo. Following the comments above, I used an online JSON parser to find the parts of my notebook that were breaking the JSON, i.e. the '<<<<<<<', '=======' and '>>>>>>>' conflict markers. Then I opened the .ipynb notebook in Notepad++ and manually replaced these markers with an empty string. After this, the notebook opened in my local Jupyter, and I also pushed the changes to GitHub.
I changed my .ipynb file's encoding from UTF-8-BOM to UTF-8, and then it worked.
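For reference, a small Python sketch of that re-encoding (the path is a placeholder); reading with 'utf-8-sig' strips the BOM if one is present:
# Re-save the notebook without a byte-order mark.
path = "notebook.ipynb"

with open(path, encoding="utf-8-sig") as f:
    text = f.read()

with open(path, "w", encoding="utf-8") as f:
    f.write(text)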
My native language is not English, but since this question partly helped me, I came back to share my solution.
The following was translated with translation software:
Fundamentally, the file format got messed up because the notebook was closed incorrectly. When the notebook is opened, the JSON format is checked first, and an error is returned if it is found to be invalid.
The mess in my file was not <<<<< or ====== markers but missing commas.
Either way, it is best to use a tool to detect the errors in the JSON syntax and then fix them by hand.
The JSON validator website suggested in the top answer works, but its error detection is not complete, so you may need to detect, fix, detect, and fix again.
VS Code can also open the file and will point out the location of JSON syntax errors; its detection is likewise incomplete and needs several passes.
The error location it reports can be hard to find in the file. I used Notepad++, whose lower right corner shows how many characters are selected (including line breaks), and selected from the first character up to the reported offset.
That is a bit clumsy, but I did not find a better way to jump to a character offset.
Clear all outputs.
Then copy the notebook.
If you use Jupyter Notebook in VS Code, just save the notebook in VS Code, close the file, and try to open it again in the browser.
On Ubuntu 20.04, I had a file String.ipynb. I had the same problem because I had run [ echo 'hello' >> String.ipynb ]. After deleting 'hello' from String.ipynb, I could open my notebook as normal.
How did I delete it? [ nano String.ipynb ], move to the last line ('hello'), and delete it.
I hope my answer helps you :D
You may be seeing this error because you got a merge conflict in the .ipynb file. Git then inserts conflict markers such as <<<<<<< HEAD into the .ipynb file, which makes it unreadable as JSON.
To overcome this, open the .ipynb file in vim and remove either the incoming changes or your own changes, depending on your use case:
vim <your-.ipynb-file-path>
The content between <<<<<<< HEAD and ======= is your current version, and the content between ======= and >>>>>>> is the incoming change. Delete the side you don't want to keep, and remove the three marker lines (<<<<<<< HEAD, =======, and >>>>>>>) as well.
I had the same issue after a git merge while using VS Code and the Jupyter extension.
VS Code would not open the notebook after the merge conflicts were marked in the notebook JSON by git (e.g. <<<<<). One way around it was to step through the changes and accept them one by one using the file viewer in the VS Code git interface.
An alternative that worked for me was to rename the file to .json so that it would open, then search for each instance of <<<<< and accept the incoming change.

Warning: last line of file ends without newline..... However it does?

I'm working on a tricky warning and, I just can't seem to get to the bottom of it. Here's what's going on:
I created a header file to define the register addresses for one of my sensor devices. My program works great; however, upon compiling the project, I get the warning "device\device_reg.h(44): warning: #1-D: last line of file ends without a newline".
However, when I go to the file, it does appear to end in a newline. I remembered that some text editors don't always handle the Enter key / newline properly. So I deleted the newline, hit Return again at the end of the last line, and recompiled. The warning persisted. I repeated this process with Notepad++ and the original Notepad in Windows. Same results...
I am currently compiling C++ code in Keil 5.1.0.0 with Arm compiler version 5.03.0.76.
Any help would be greatly appreciated.
Thank you,
Beau
I had a similar problem with an included file. It turned out that the last line was indented, so when I hit the Enter key the new line was indented too, leaving stray spaces on it. I hit backspace to return the last row to column 0 and the warning went away.
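If you want to check the file programmatically rather than by eye, here is a minimal Python sketch (the header path is a placeholder):
# Does the header end with a newline, and is the last line free of stray
# indentation or trailing whitespace?
path = "device_reg.h"

with open(path, "rb") as f:
    data = f.read()

ends_with_newline = data.endswith(b"\n")
last_line = data.rstrip(b"\r\n").split(b"\n")[-1]
trailing_ws = last_line != last_line.rstrip()

print("ends with newline:", ends_with_newline)
print("last line has trailing whitespace:", trailing_ws)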
So, it wasn't the solution I wanted, but I resolved the issue by combining the register map definitions with the class header file. I still don't know why I was getting this problem earlier; if anyone has any insights, that would be great. However, as of now, this problem is solved.
Thank you.

Tesseract: Specifying regions of text

I'm using tesseract-ocr-3.01 to scan many forms. The forms all follow a template, so I already know where the regions/rectangles of text are.
Is there a way to pass those regions to tesseract when using the command-line tool?
I found the answer, thanks to this thread.
It seems that Tesseract supports the uzn format (used in the UNLV tests).
From the thread:
Calling tesseract with the parameter "-psm 4" and renaming the uzn file
to the same name as the image seems to work.
Example: If we have C:\input.tif and C:\input.uzn, we do this:
tesseract -psm 4 C:\input.tif C:\output
This may not be an optimal answer, but here goes:
I'm not sure whether the command-line tool has options to specify text regions.
What you can do is use a Tesseract wrapper on another platform (EmguCV has Tesseract built in). You take the scanned image, crop out the text regions, and give them to Tesseract one at a time. This way you'll also avoid any inaccuracies in Tesseract's page-layout analysis.
eg.
// using Emgu.CV; using Emgu.CV.Structure; using System.Drawing;
Image<Gray, Byte> scannedImage = new Image<Gray, Byte>(path_to_scanned_image);
// assuming you know a text region, e.g. the 100x20 rectangle at the top-left corner
Image<Gray, Byte> textRegion = new Image<Gray, Byte>(100, 20);
scannedImage.ROI = new Rectangle(0, 0, 100, 20);
scannedImage.CopyTo(textRegion);
ocr.Recognize(textRegion);
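If you would rather do this in Python, the same crop-then-OCR idea looks roughly like this with the third-party Pillow and pytesseract packages (the file name and rectangle are placeholders for your own template):
# Crop one known text region of the form and OCR only that region.
from PIL import Image
import pytesseract

scanned = Image.open("input.tif")

# (left, upper, right, lower) for one text region of the template
region = scanned.crop((0, 0, 100, 20))

text = pytesseract.image_to_string(region)
print(text)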

igraph for python

I'm thoroughly confused about how to read/write into igraph's Python module. What I'm trying right now is:
g = igraph.read("football.gml")
g.write_svg("football.svg", g.layout_circle() )
I have a football.gml file, and this code runs and writes a file called football.svg. But when I try to open it using InkScape, I get an error message saying the file cannot be loaded. Is this the correct way to write the code? What could be going wrong?
The write_svg function is sort of deprecated; it was meant only as a quick hack to allow SVG exports from igraph even if you don't have the Cairo module for Python. It has not been maintained for a while so it could be the case that you hit a bug.
If you have the Cairo module for Python (on most Linux systems, you can simply install it from an appropriate package), you can simply do this:
igraph.plot(g, "football.svg", layout="circle")
This would use Cairo's SVG renderer, which is likely to generate the correct result. If you cannot install the Cairo module for Python for some reason, please file a bug report on https://bugs.launchpad.net/igraph so we can look into this.
(Even better, please file a bug report even if you managed to make it work using igraph.plot).
A couple of years late, but maybe this will be helpful to somebody.
The write_svg function seems not to escape ampersands correctly. Texas A&M has an ampersand in its label -- Inkscape is probably confused because it sees a raw & rather than the escaped &amp;. Just open football.svg in a text editor to fix that, and you should be golden!
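If you want to fix it programmatically instead, here is a minimal sketch that escapes the ampersands before exporting; it assumes the team names live in the 'label' vertex attribute, which is the case for the common football.gml dataset:
# Escape ampersands in vertex labels so the SVG produced by write_svg is valid.
import igraph

g = igraph.read("football.gml")
if "label" in g.vs.attributes():
    g.vs["label"] = [str(l).replace("&", "&amp;") for l in g.vs["label"]]
g.write_svg("football.svg", g.layout_circle())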

How to unpack a DLL file which is UPX packed but whose headers have been changed?

I have a file which is UPX packed, but the headers have been changed. Is there any way I can still identify it as UPX packed, and how do I unpack it?
I have tried a lot of tutorials and I am fed up, as they all explain the same method, which doesn't work for me.
the same problem is mentioned in the following :
http://www.reteam.org/board/showthread.php?t=2670
I am not a well-versed reverse engineer :( just a noob... any ideas would be really helpful.
To identify the packer, I use PEiD, ProtectionID, etc.
For correcting the headers, you need to open the file in a hex editor and fix the offsets in the binary manually. Then you can use upx.exe to unpack it:
upx -d <filename>
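As a starting point for the hex-editing step, a small sketch with the third-party pefile package (the path is a placeholder) can show whether the usual UPX0/UPX1 section names are still present or have been renamed, which is one of the header changes people typically undo before upx -d will recognise the file:
# Inspect PE section names; a stock UPX-packed binary normally has UPX0/UPX1.
import pefile

pe = pefile.PE("packed.dll")
for section in pe.sections:
    name = section.Name.rstrip(b"\x00").decode(errors="replace")
    print(name, hex(section.VirtualAddress), hex(section.SizeOfRawData))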