I'm trying to get the digits from this LCD display:
LCD Display
I used pytesseract with this code:
import cv2
import pytesseract

img = cv2.imread('1.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # get_grayscale(img)
img = cv2.bitwise_not(img)                    # invert: dark digits on a light background
custom_config = r'--psm 7 --oem 3 -c tessedit_char_whitelist=0123456789'
print(pytesseract.image_to_string(img, lang='eng', config=custom_config))
But I didn't get a good result.
I also tried passing the cropped image:
Cropped LCD Display
I get a better result, but it's still not good enough.
I also tried installing other OCR modules (like calamari-ocr and easyocr), but I always get a different error while trying to install them.
What can I try?
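One idea I haven't tested yet: threshold the image and dilate it so the separate strokes of the seven-segment digits join up before handing it to pytesseract. A minimal sketch with OpenCV, where the kernel size and iteration count are just placeholder values to tune:

import cv2
import pytesseract

# Rough sketch: close the gaps between the seven-segment strokes before OCR.
# The 3x3 kernel and the 2 iterations are assumptions to adjust.
img = cv2.imread('1.png', cv2.IMREAD_GRAYSCALE)
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
img = cv2.dilate(img, kernel, iterations=2)
img = cv2.bitwise_not(img)  # Tesseract prefers dark text on a light background
config = r'--psm 7 -c tessedit_char_whitelist=0123456789'
print(pytesseract.image_to_string(img, config=config))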
We've signed up for the Pro plan and now we need to create a report using the Map Image REST API to generate heatmaps with multiple colors (more than 4 colors).
I saw in the documentation that there is a limit of 4 levels and colors; I'm wondering whether it's possible to use more colors in order to meet our requirements.
Do you have plans to increase the limits, or a beta version that doesn't have those limits?
For instance, we need to create 6 areas, each with a different color, and 6 levels on the same map, as shown in the following image. I should be able to use 6 different colors, but only 4 colors show up.
Map image example with 6 areas
Here is the request:
GET https://image.maps.ls.hereapi.com/mia/1.6/heat
?apiKey={{API_KEY}}
# Area 1 - Yellow
&a0=49.27,-123.48
&rad0=1900
&l0=0
# Area 2 - Red
&a1=49.25,-123.38
&rad1=1500
&l1=1
# Area 3 - Blue
&a2=49.18,-123.342144
&rad2=1500
&l2=2
# Area 4 - Green
&a3=49.28,-123.35
&rad3=1000
&l3=3
# Area 5 - Orange
&a4=49.21,-123.55
&rad4=1800
&l4=4
# Area 6 - White
&a5=49.30,-123.60
&rad5=1000
&l5=5
#
&z=11
&w=900
&h=900
&plt=FCFF00,EB2501,001EFF,1FE80C,FF8C0D,FFFFFF
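For reference, here is the same request assembled programmatically (just a sketch using Python's requests library; the parameters are copied verbatim from the URL above, and the API key and output file name are placeholders):

import requests

# Sketch only: same heat-map request as above, built with the requests library.
params = {
    "apiKey": "YOUR_API_KEY",
    "a0": "49.27,-123.48", "rad0": 1900, "l0": 0,       # Area 1 - Yellow
    "a1": "49.25,-123.38", "rad1": 1500, "l1": 1,       # Area 2 - Red
    "a2": "49.18,-123.342144", "rad2": 1500, "l2": 2,   # Area 3 - Blue
    "a3": "49.28,-123.35", "rad3": 1000, "l3": 3,       # Area 4 - Green
    "a4": "49.21,-123.55", "rad4": 1800, "l4": 4,       # Area 5 - Orange
    "a5": "49.30,-123.60", "rad5": 1000, "l5": 5,       # Area 6 - White
    "z": 11, "w": 900, "h": 900,
    "plt": "FCFF00,EB2501,001EFF,1FE80C,FF8C0D,FFFFFF",
}
resp = requests.get("https://image.maps.ls.hereapi.com/mia/1.6/heat", params=params)
with open("heatmap.png", "wb") as f:
    f.write(resp.content)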
Thanks!
I can't speak to our plans for this API; I can, however, raise a ticket internally asking that this be considered. My guess is that the limit exists for performance as well as URL-length concerns, but at minimum I can ask.
I recently started working in IRAF, as I need it for image data reduction.
I tried to stack .fit images using the imalign task, but I get this error message:
This was a test, so I have only 4 images in the input and output lists, and I have 4 shifts in shiftlist.txt. These are my files - input list:
NGC7286-0001_B.fit
NGC7286-0003_B.fit
NGC7286-0004_B.fit
NGC7286-0005_B.fit
Output list:
sh-NGC7286-0001_B.fit
sh-NGC7286-0003_B.fit
sh-NGC7286-0004_B.fit
sh-NGC7286-0005_B.fit
Shiftlist:
0.0 0.0
3.751 4.55
3.997 9.273
3.107 15.243
List of coordinates of reference stars:
618.58 666.96
1136.19 711.39
1288.88 942.79
1417.72 927.84
1004.71 1517.73
1053.39 1756.91
532.16 1794.60
Why do I get this error message? Do you see anything wrong with my files?
If I use the shiftlist I calculated, do I need to change bigbox (20) and/or boxsize (7)? Thank you in advance.
Found the solution, though I don't know why my IRAF has that problem.
I can't have the "_" character in the names of my images.
Strange, but now the alignment works, I guess.
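In case it helps anyone else, a quick sketch of how the underscores could be stripped from the file names with Python (the replacement character is arbitrary, and I'm assuming the script runs in the data directory; the input/output/shift lists have to be updated to the new names as well):

import os

# Sketch: rename every .fit file so it no longer contains "_".
# Remember to update the input and output list files to match.
for name in os.listdir('.'):
    if name.endswith('.fit') and '_' in name:
        os.rename(name, name.replace('_', '-'))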
I've installed tesseract in my Linux environment.
It works when I execute something like
# tesseract myPic.jpg /output
But my picture has some small labels and tesseract doesn't see them.
Is there an option available to set a pitch or something like that?
Example of text labels:
With this pic, tesseract doesn't recognize any value...
But with this pic:
I have the following output:
J8
J7A-J7B P7 \
2
40 50 0 180 190
200
P1 P2 7
110 110
\ l
For example, in this case, the 90 (at the top left) is not seen by tesseract...
I think it's just an option to set or something like that, no?
Thx
In order to get accurate results from Tesseract (or any other OCR engine), you will need to follow some guidelines, as can be seen in my answer on this post:
Junk results when using Tesseract OCR and tess-two
Here is the gist of it:
Use a high-resolution image (if needed); 300 DPI is the minimum
Make sure there are no shadows or bends in the image
If there is any skew, you will need to fix the image in code prior to OCR
Use a dictionary to help get good results
Adjust the text size (12 pt font is ideal)
Binarize the image and use image processing algorithms to remove noise
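As a rough illustration of the resize / binarize / de-noise points above, here is a small sketch using OpenCV and pytesseract (not the code I actually used; the file name, scale factor and kernel size are only examples to tune):

import cv2
import pytesseract

# Illustrative preprocessing only: upscale small text, remove speckle noise, binarize.
img = cv2.imread('labels.jpg', cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
img = cv2.medianBlur(img, 3)
_, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print(pytesseract.image_to_string(img))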
It is also recommended to spend some time training the OCR engine to get better results, as seen in this link: Training Tesseract
I took the 2 images that you shared and ran some image processing on them using the LEADTOOLS SDK (disclaimer: I am an employee of this company). With the processed images I was able to get better results than you were getting, but since the original images aren't the greatest, it still was not 100%. Here is the code I used to try and fix the images:
//initialize the codecs class
using (RasterCodecs codecs = new RasterCodecs())
{
    //load the file
    using (RasterImage img = codecs.Load(filename))
    {
        //Run the image processing sequence starting by resizing the image
        double newWidth = (img.Width / (double)img.XResolution) * 300;
        double newHeight = (img.Height / (double)img.YResolution) * 300;
        SizeCommand sizeCommand = new SizeCommand((int)newWidth, (int)newHeight, RasterSizeFlags.Resample);
        sizeCommand.Run(img);

        //binarize the image
        AutoBinarizeCommand autoBinarize = new AutoBinarizeCommand();
        autoBinarize.Run(img);

        //change it to 1BPP
        ColorResolutionCommand colorResolution = new ColorResolutionCommand();
        colorResolution.BitsPerPixel = 1;
        colorResolution.Run(img);

        //save the image as PNG
        codecs.Save(img, outputFile, RasterImageFormat.Png, 0);
    }
}
Here are the output images from this process:
I am using the Tesseract library to extract text from images. The language is Vietnamese. I have two images. The first one is from a website. The second is a screenshot taken from the WordPad program. They are shown in the links below:
1
2
The first one has 95% accuracy.
Bán căn hộ tầng 5 khu tập thể Thành công Bắc, DT 28m2, gần chợ ThànhCông,
số
đỏ, chính chủ, giá 800 triệu.LH:A.Châu, 0979622551,0905685336
The second image is much larger but the accuracy is just about 60%.
Bặn căn hộ tầng ậ khu tập thể Ỉhành gông
Băc. llĩ 28 m2. gân chợ ĩllành Bông. sũ Ilỏ.
chính l:lIlì. giá 800 lriệu. l.ll: A.BhâU,
0979622551, 0905685336
What do I have to fix about the second image to get text as accurate as the first one?
As stated by @user898678 in "image processing to improve tesseract OCR accuracy",
the following operations can improve OCR accuracy:
fix DPI (if needed); 300 DPI is the minimum
fix text size (e.g. 12 pt should be ok)
try to fix text lines (deskew and dewarp text)
try to fix the illumination of the image (e.g. no dark parts of the image)
binarize and de-noise the image
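As one concrete example of the DPI and text-size points, here is a sketch only (the file name '2.png', the 2x upscale and the 300 DPI value are assumptions to adjust for your screenshot):

import pytesseract
from PIL import Image

# Sketch: upscale the screenshot and tell Tesseract the effective DPI
# before running Vietnamese recognition.
img = Image.open('2.png').convert('L')
img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
print(pytesseract.image_to_string(img, lang='vie', config='--dpi 300'))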
I want to render an ERDAS image file (suffix .img) with the UMN MapServer. The data is rendered in the right position and with the correct shape, but everything is white instead of a raster image. The image contains many layers. My mapfile looks like this:
MAP
  NAME "Test"
  WEB
    METADATA
      "wms_title" "test"
      "WMS_SRS" "epsg:31466 epsg:31467 epsg:31468 epsg:31469 epsg:4326 epsg:25832 epsg:3035"
    END
    LOG "test.log"
    IMAGEPATH "."
  END
  SHAPEPATH "."
  PROJECTION
    "init=epsg:32632"
  END
  LAYER
    NAME "testlayer"
    TYPE RASTER
    DATA "test.img"
    STATUS ON
    OFFSITE 0 0 0
  END
  OUTPUTFORMAT
    NAME png
    DRIVER "GD/PNG"
    MIMETYPE "image/png"
    IMAGEMODE RGBA
  END
END
To answer my own question: the input file had 16 bits per channel, and that didn't work out of the box. MapServer can scale the colors, but you need the range from the people who know the image. In my case, I was told to scale from 0 to 22000, so I added the following line to the layer definition:
PROCESSING "SCALE=0,22000"
That worked well; now I can see structure in the image. If you don't know the correct scale, you could try the following:
PROCESSING "SCALE=AUTO"
I hope this helps someone who runs into the same trouble in the future.