I'm new to Tesseract. I tried to start training it, but while creating the shapetable file at the command prompt it threw an error: 'Failed to load font_properties from font_properties'. Has anyone faced this issue before? Can you let me know how to fix it, please?
Code: shapeclustering -F font_properties -U unicharset -O block.unicharset block.font.exp0.tr
Error: Failed to load font_properties from font_properties
Does font_properties exist? What is in it?
BTW: Tesseract training is a waste of time for new Tesseract users.
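For context, a sketch of what that file looks like: in Tesseract 3.x training, font_properties is a plain-text file (looked up relative to the directory you run the command from) with one line per font in the form <fontname> <italic> <bold> <fixed_pitch> <serif> <fraktur>, each flag 0 or 1. Assuming block.font.exp0.tr follows the usual [lang].[fontname].exp[num].tr naming, the font name it expects would be "font", so the file could contain just:
font 0 0 0 0 0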
I started learning how to code two months ago, so everything is very new to me. Currently I'm trying to learn how to use Logstash from the Elastic website (learning how to move data from MySQL to Elasticsearch using Logstash). I've run into a problem and I don't know how to solve it:
I tried to follow the instructions from these links:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
and
https://www.elastic.co/guide/en/beats/libbeat/6.4/config-file-permissions.html
After I tried:
sudo ./filebeat -e -c filebeat.yml -d "publish"
I got an error message saying:
"config file ("filebeat.yml") must be owned by the beat user (uid=0) or root"
So I tried
"chown 0 filebeat.yml" and "chown root filebeat.yml"
But it says: "chown: filebeat.yml: Operation not permitted"
How can I solve this problem?
I've also tried to use
"--strict.perms=false"
but it says "-bash: --strict.perms=false: command not found"
Can anyone please help me with this?
Try sudo -i, enter your root password, and run it again:
sudo ./filebeat -e --strict.perms=false
This starts Filebeat with the --strict.perms=false flag set, which relaxes the strict ownership/permission check on the config file.
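Alternatively, the chown attempts in the question most likely failed only because they weren't run with elevated privileges; prefixing them with sudo should let you fix the ownership itself and then start Filebeat as before:
sudo chown root filebeat.yml
sudo ./filebeat -e -c filebeat.yml -d "publish"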
Could you please suggest what I am doing wrong? I cannot change the delimiter of the output file using the es2csv CLI tool.
es2csv -q '*' -i test_index -o test.csv -f id name -d /t
Actually this issue has been reported here: https://github.com/taraslayshchuk/es2csv/issues/51
If you don't want to wait for the fix to be released, you can change line 212 of es2csv.py like this and it will work:
csv_writer = csv.DictWriter(output_file, fieldnames=self.csv_headers, delimiter=unicode(self.opts.delimiter))
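Separately, note that the delimiter in the command above is written as /t, which is presumably meant to be a tab character; a real tab can be passed from a bash-style shell with ANSI-C quoting (a usage sketch reusing the index and field names from the question):
es2csv -q '*' -i test_index -o test.csv -f id name -d $'\t'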
I apologize if my question doesn't meet the standards for asking here.
I have two files(products-data.json, orders-data.json) inside the following directory:
G:\kb\Couchbase\CB121
and I imported the products-data.json successfully using the following command:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://products-data.json -t 4 -g %type%::%variety%::#MONO_INCR#
But when I try to import orders-data.json in the same way as follows:
G:\kb\Couchbase\CB121>cbimport.exe json -c couchbase://127.0.0.1 -u sattar -p 156271 -b sampleDB -f lines -d file://orders-data.json -t 4 -g %type%::%order_id%
I am getting the following error:
2018-01-21T12:01:31.211+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- jsondata.(*Parallelizer).Execute() at source.go:198
2018-01-21T12:01:31.212+06:00 ERRO: open orders-data.json: The system cannot find the file specified. -- plan.(*data).execute() at data.go:89
Json import failed: open orders-data.json: The system cannot find the file specified.
This is killing my day. Any help is appreciated. Thanks.
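A quick sanity check, not a guaranteed fix: since the error is a plain file-not-found, confirm the exact filename the shell sees (Windows Explorer hides known extensions, so the file could really be named orders-data.json.json):
G:\kb\Couchbase\CB121>dir orders-data*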
I have a directory of roughly 45,000 json files. The total size is around 12.8 GB currently. This is website data from kissmetrics and its structure is detailed here.
The data:
- Each file contains multiple JSON documents separated by newlines
- It will be updated every 12 hours with new additional files
I want to import this data to mongoDB using mongoimport. I've tried this shell script to make the process easier:
for filename in revisions/*; do
    echo "$filename"
    mongoimport --host <HOSTNAME>:<PORT> --db <DBNAME> --collection <COLLECTIONNAME> \
        --ssl --sslCAFile ~/mongodb.pem --username <USERNAME> --password <PASSWORD> \
        --authenticationDatabase admin "$filename"
done
This mostly works, but it will sometimes fail with errors like:
2016-06-18T00:31:10.781+0000 using 1 decoding workers
2016-06-18T00:31:10.781+0000 using 1 insert workers
2016-06-18T00:31:10.781+0000 filesize: 113 bytes
2016-06-18T00:31:10.781+0000 using fields:
2016-06-18T00:31:10.822+0000 connected to: <HOSTNAME>:<PORT>
2016-06-18T00:31:10.822+0000 ns: <DBNAME>.<COLLECTION>
2016-06-18T00:31:10.822+0000 connected to node type: standalone
2016-06-18T00:31:10.822+0000 standalone server: setting write concern w to 1
2016-06-18T00:31:10.822+0000 using write concern: w='1', j=false, fsync=false, wtimeout=0
2016-06-18T00:31:10.822+0000 standalone server: setting write concern w to 1
2016-06-18T00:31:10.822+0000 using write concern: w='1', j=false, fsync=false, wtimeout=0
2016-06-18T00:31:10.824+0000 Failed: error processing document #1: invalid character 'l' looking for beginning of value
2016-06-18T00:31:10.824+0000 imported 0 documents
I will potentially run into this error, and from my inspection it is not due to malformed data.
The error may happen hours into the import.
Can I parse the error output from mongoimport and retry the same document? I don't know whether the error will always have this same form, so I'm not sure I can handle it in bash. Can I keep track of progress in bash and restart if the import is terminated early? Any suggestions on importing data of this size or handling the error in the shell?
Typically a given command will return an error code when it fails (and these are hopefully documented on the man page for the command).
So if you want to do something hacky and just retry once:
cmd="mongoimport --foo --bar..."
$cmd
ret=$?
if [ $ret -ne 0 ]; then
    echo "retrying..."
    $cmd
    if [ $? -ne 0 ]; then
        echo "failed again. Sadness."
        exit 1
    fi
fi
Or if you really need what mongoimport outputs, capture it like this:
results=$(mongoimport --foo --bar...)
Now the variable $results will contain what was printed on stdout. You might have to redirect stderr as well.
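Two follow-ups, as sketches rather than drop-in solutions. mongoimport writes its log lines to stderr, so to capture those fold stderr into the capture:
results=$(mongoimport --foo --bar... 2>&1)
And for the restart question: a simple done-list makes the loop resumable. This assumes the revisions/ layout from the question and an illustrative done.log progress file:
# Skip files already recorded in done.log, import the rest,
# and record each file only after a successful import.
for filename in revisions/*; do
    grep -qxF "$filename" done.log 2>/dev/null && continue
    if mongoimport --host <HOSTNAME>:<PORT> --db <DBNAME> --collection <COLLECTIONNAME> \
        --ssl --sslCAFile ~/mongodb.pem --username <USERNAME> --password <PASSWORD> \
        --authenticationDatabase admin "$filename"; then
        echo "$filename" >> done.log
    else
        echo "import failed on $filename" >&2
    fi
done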
Following the instructions in https://github.com/membase/manifest, I get this error:
daemon/memcached.h:12:27: fatal error: cbsasl/cbsasl.h: No such file or directory
after:
$ repo init -u git://github.com/membase/manifest.git -m branch-2.1.0.xml
$ repo sync
$ make
The same happens with branch-2.0.1.xml.
Thanks in advance!!!
There is no compile support for Ubuntu 13.10.