I recently discovered Gatsby and I want to use this template for my own website:
https://github.com/toboko/gatsby-starter-fine
After downloading it, I managed to run it at http://localhost:8000/, but I get this error, which I can't escape:
TypeError: strings.slice(...).reduce is not a function
I added my repository here so you can take a look too: https://github.com/melariza/gatsby-starter-fine
Could you take a look and help fix it?
Screenshot of the error:
Here's the error text:
TypeError: strings.slice(...).reduce is not a function
css
/Users/mga/Sites/gatsby-starter-fine/.cache/loading-indicator/style.js:5
2 |
3 | function css(strings, ...keys) {
4 | const lastIndex = strings.length - 1
> 5 | return (
6 | strings.slice(0, lastIndex).reduce((p, s, i) => p + s + keys[i], ``) +
7 | strings[lastIndex]
8 | )
View compiled
Style
/Users/mga/Sites/gatsby-starter-fine/.cache/loading-indicator/style.js:14
11 | const Style = () => (
12 | <style
13 | dangerouslySetInnerHTML={{
> 14 | __html: css`
15 | :host {
16 | --purple-60: #663399;
17 | --gatsby: var(--purple-60);
View compiled
▶ 18 stack frames were collapsed.
(anonymous function)
/Users/mga/Sites/gatsby-starter-fine/.cache/app.js:165
162 | dismissLoadingIndicator()
163 | }
164 |
> 165 | renderer(<Root />, rootElement, () => {
166 | apiRunner(`onInitialClientRender`)
167 |
168 | // Render query on demand overlay
View compiled
I guess the problem is related to Node and its dependencies. The repository is not an official Gatsby starter and the last commit dates from 3 years ago. Gatsby is now on version 4.14 while the starter is on ^2.0.50: two major versions of Gatsby alone in those 3 years, to say nothing of the rest of the dependencies.
The starter doesn't contain a .nvmrc file or an engines field in package.json, so the Node version that project was built against is unknown. Be aware that if you clone or fork that project, you will have a lot of deprecated dependencies and several migrations to do (from v2 to v3 and from v3 to v4).
So my advice is to avoid that repository and use one of the official starters. If that's not an option, try playing around with the Node version, starting from 12 onwards, reinstalling node_modules each time you upgrade or downgrade.
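That trial-and-error loop can be sketched in shell (assuming nvm is installed; the version list and the `develop` npm script name are assumptions, not taken from the starter):

```shell
# Sketch: try the dev server under several Node majors, wiping
# node_modules between attempts. Not a guaranteed fix.
try_node_versions() {
  for v in "$@"; do
    echo "== Node $v =="
    nvm install "$v" && nvm use "$v" || continue
    rm -rf node_modules package-lock.json
    npm install && npm run develop && return 0
  done
  return 1
}
# Usage: try_node_versions 12 14 16
```

The function stops at the first version under which the dev server starts.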
In short
I need to load several JSON templates into Elasticsearch via the filebeat.yaml configuration.
I have
Directory with templates:
-rootdir
|
| - templates
|
|- some-template.json
|- some-2-template.json
|- some-3-template.json
Pre-setup properties in the filebeat.yaml configuration, like:
setup.template:
  json:
    enabled: true
    path: /rootdir/templates
    pattern: "*-template.json"
    name: "json-templates"
This is actually a blueprint, as I do not know how to load all the templates into Elasticsearch: with this config a single template loads successfully if I append a file name to path, for example /some-template.json.
After starting Filebeat, I get the following error logs:
ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(http://:9200)): Connection marked as failed because the onConnect callback failed: error loading template: error reading file /rootdir/templates for template: read /rootdir/templates: is a directory
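So a single file loads, but a directory does not. A fallback I'm considering (a sketch, not Filebeat configuration) is to script the uploads myself against the legacy `_template` API; the `echo` makes this a dry run that only prints the commands:

```shell
# Dry-run sketch: build one PUT per *-template.json file,
# naming each template after its file name.
load_templates() {
  dir=$1; es=$2
  for f in "$dir"/*-template.json; do
    [ -e "$f" ] || continue
    name=$(basename "$f" .json)
    echo curl -s -X PUT "$es/_template/$name" \
      -H 'Content-Type: application/json' --data-binary "@$f"
  done
}
# Usage: load_templates /rootdir/templates http://localhost:9200
```

Removing the `echo` would actually send the requests, one per template file.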
Question is
How can I upload multiple files within one property, with a different index pattern in each template, so that the results of running GET _cat/templates?v=true look like this:
name index_patterns order version composed_of
some-template [some*] 0 7140099
some-2-template [some-2*] 0 7140099
some-3-template [some-3*] 0 7140099
.monitoring-es [.monitoring-es-7-*] 0 7140099
.monitoring-alerts-7 [.monitoring-alerts-7] 0 7140099
.monitoring-logstash [.monitoring-logstash-7-*] 0 7140099
.monitoring-kibana [.monitoring-kibana-7-*] 0 7140099
.monitoring-beats [.monitoring-beats-7-*] 0 7140099
ilm-history [ilm-history-5*] 2147483647 5 []
.triggered_watches [.triggered_watches*] 2147483647 12 []
.kibana-event-log-7.16.3-template [.kibana-event-log-7.16.3-*] 0 []
.slm-history [.slm-history-5*] 2147483647 5 []
synthetics [synthetics-*-*] 100 1 [synthetics-mappings, data-streams-mappings, synthetics-settings]
metrics [metrics-*-*] 100 1 [metrics-mappings, data-streams-mappings, metrics-settings]
.watch-history-12 [.watcher-history-12*] 2147483647 12 []
.deprecation-indexing-template [.logs-deprecation.*] 1000 1 [.deprecation-indexing-mappings, .deprecation-indexing-settings]
.watches [.watches*] 2147483647 12 []
logs [logs-*-*] 100 1 [logs-mappings, data-streams-mappings, logs-settings]
.watch-history-13 [.watcher-history-13*] 2147483647 13 []
Additionally
I'm running Filebeat and Elasticsearch in Docker using Docker Compose, in case that's helpful.
Thank you in advance!
Best Regards, Anton.
I am trying to train a model using YOLOv5.
I have the issue of Dataset not found.
I have train, test, and valid folders that contain all the image and label files.
I have tested the files on Google Colab and it does work. However, on my local machine it shows the issue of Exception: Dataset not found.
(Yolo_5) D:\\YOLO_V_5\Yolo_V5\yolov5>python train.py --img 416 --batch 8 --epochs 100 --data /data.yaml --cfg models/yolov5s.yaml --weights '' --name yolov5s_results --cache
Using torch 1.7.0 CUDA:0 (GeForce GTX 1080, 8192MB)
Namespace(adam=False, batch_size=8, bucket='', cache_images=True, cfg='models/yolov5s.yaml', data='.\\data.yaml', device='', epochs=100, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[416, 416], local_rank=-1, log_imgs=16, multi_scale=False, name='yolov5s_results', noautoanchor=False, nosave=False, notest=False, project='runs/train', rect=False, resume=False, save_dir='runs\\train\\yolov5s_results55', single_cls=False, sync_bn=False, total_batch_size=8, weights="''", workers=16, world_size=1)
Start Tensorboard with "tensorboard --logdir runs/train", view at http://localhost:6006/
Hyperparameters {'lr0': 0.01, 'lrf': 0.2, 'momentum': 0.937, 'weight_decay': 0.0005, 'warmup_epochs': 3.0, 'warmup_momentum': 0.8, 'warmup_bias_lr': 0.1, 'box': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'anchors': 3, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.1, 'scale': 0.5, 'shear': 0.0, 'perspective': 0.0, 'flipud': 0.0, 'fliplr': 0.5, 'mosaic': 1.0, 'mixup': 0.0}
WARNING: Dataset not found, nonexistent paths: ['D:\\me1eye\\Yolo_V5\\valid\\images']
Traceback (most recent call last):
File "train.py", line 501, in <module>
train(hyp, opt, device, tb_writer, wandb)
File "train.py", line 78, in train
check_dataset(data_dict) # check
File "D:\me1eye\YOLO_V_5\Yolo_V5\yolov5\utils\general.py", line 92, in check_dataset
raise Exception('Dataset not found.')
Exception: Dataset not found.
Internal process exited
(Olive_Yolo_5) D:\me1eye\YOLO_V_5\Yolo_V5\yolov5>
There is a much simpler solution. Just go into data.yaml, wherever you saved it, and change the relative paths to absolute, i.e. just write the whole path! e.g.
train: C:\hazlab\BCCD\train\images
val: C:\hazlab\BCCD\valid\images
nc: 3
names: ['Platelets', 'RBC', 'WBC']
Job done. Note: as you are on Windows, there is a known issue in the invocation of train.py - do not use quotes around the file names in the CLI, e.g.
!python train.py --img 416 --batch 16 --epochs 100 --data C:\hazlab\BCCD\data.yaml --cfg ./models/custom_yolov5s.yaml --weights '' --name yolov5s_results --cache
Well! I have also encountered this problem, and I fixed it.
All you have to do is keep the train, test, and validation folders (the three folders containing images and labels) and the yolov5 folder (cloned from GitHub) in the same directory. Also, the data.yaml file has to be inside the yolov5 folder.
Command to train the model would be like this:
!python train.py --img 416 --batch 16 --epochs 10 --data ./data.yaml --cfg ./models/yolov5m.yaml --weights '' --name yolov5m_results
The issue is that the actual dataset path is not found. I hit the same issue when I trained a YOLOv5 model on a custom dataset using Google Colab, and I did the following to resolve it:
Make sure you provide the correct path to the dataset's data.yaml.
Make sure the dataset paths inside data.yaml are correct.
The train, test, and valid keys should contain paths relative to the main path of the dataset.
An example data.yaml file is given below.
path: /content/drive/MyDrive/car-detection-dataset
train: train/images
val: valid/images
test: test/images
nc: 1
names: ['car']
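Before launching train.py, a quick sanity check is to confirm that the directories data.yaml points to actually exist; a small sketch (run it from the directory the relative paths are resolved against; the paths below are examples):

```shell
# Print any dataset directory that is missing.
check_paths() {
  for d in "$@"; do
    [ -d "$d" ] || echo "MISSING: $d"
  done
}
# Usage: check_paths train/images valid/images test/images
```

Any "MISSING" line here is exactly what check_dataset would later reject.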
Is there a way to analyze the contents of a specific index (fdb file)? I know I can look at the index creation statement and try to guess from there, but it would be nice if there were a way to see the contents/records inside an fdb file.
Two tools, cbindex and forestdb_dump, can help. These are available in the bin folder along with the other Couchbase binaries. Note that these tools are not supported, as documented at http://developer.couchbase.com/documentation/server/4.5/release-notes/relnotes-40-ga.html
Given a bucket/index name, the cbindex tool gets index-level details:
couchbases-MacBook-Pro:bin varakurprasad$ pwd
/Users/varakurprasad/Downloads/couchbase-server-enterprise_451_GA/Couchbase Server.app/Contents/Resources/couchbase-core/bin
couchbases-MacBook-Pro:bin varakurprasad$ ./cbindex -server 127.0.0.1:8091 -type scanAll -bucket travel-sample -limit 4 -index def_type -auth Administrator:couch1
ScanAll index:
[airline] ... airline_10
[airline] ... airline_10123
[airline] ... airline_10226
[airline] ... airline_10642
Total number of entries: 4
Given a forestdb file, the forestdb_dump tool gets more low-level details:
couchbases-MacBook-Pro:varakurprasad$ pwd
/Users/varakurprasad/Library/Application Support/Couchbase/var/lib/couchbase/data/#2i/travel-sample_def_type_1018858748122363634_0.index
couchbases-MacBook-Pro:varakurprasad$ forestdb_dump data.fdb.53 | more
[FDB INFO] Forestdb opened database file data.fdb.53
DB header info:
BID: 1568 (0x620, byte offset: 6422528)
DB header length: 237 bytes
DB header revision number: 3
...
Doc ID: airline_10
KV store name: back
Sequence number: 14637
Byte offset: 2063122
Indexed by the main index
Length: 10 (key), 0 (metadata), 24 (body)
Status: normal
Metadata: (null)
Body:^Fairline
...
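If you only need the record keys, the dump can be post-processed; a sketch that assumes the forestdb_dump output was saved to a file:

```shell
# List just the "Doc ID:" values from a saved forestdb_dump output.
doc_ids() {
  awk -F': ' '/^Doc ID:/ {print $2}' "$1"
}
# Usage: forestdb_dump data.fdb.53 > dump.txt && doc_ids dump.txt
```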
I have a script that gets data from multiple sources, and I want to format its output as an HTML table.
Edited:
The format at the moment:
[Environment Name]
[Back end version]
[DB Version]
[event1 status] [event2 status] [event schema] [nodes] [node_no] [vpool] [ver] [node_ip]
The list at the moment:
grid-dev
BE version: 6.0
Database version: 10
DISABLED DISABLED dev_1 3 01 1 10.0.19-MariaDB 10.101.666.11:3306
grid-test
BE version: 7.0
Database version: 11
ENABLED ENABLED test_1 2 02 4 10.0.17-MariaDB 10.108.777.14:3306
grid-test
BE version: 7.0
Database version: 11
SLAVESIDE_DISABLE SLAVESIDE_DISABLE test_2 1 02 3 10.0.17-MariaDB 10.108.777.47:3306
grid-staging
BE version: 6.0
Database version: 10
DISABLED DISABLED staging_1 2 02 4 10.0.18-MariaDB 10.109.888.22:3306
and I want to format it into an HTML table, something like this:
ENVIRONMENT BACKEND_VERSION DB_VERSION EVENT1 EVENT2 SCHEMA NODES NODE_NO VPOOL VERSION IP
----------------------------------------------------------------------------------------------------------------------------------------------------------
grid-dev 6 10 DISABLED DISABLED dev_1 3 01 1 10.0.19-MariaDB 10.101.666.11:3306
grid-test 7 11 ENABLED ENABLED test_1 2 02 4 10.0.17-MariaDB 10.108.777.14:3306
grid-test 7 11 SLAVES... SLAVESI... test_2 2 01 3 10.0.17-MariaDB 10.108.777.47:3306
grid-staging 6 10 DISABLED DISABLED stag_1 2 02 4 10.0.18-MariaDB 10.109.888.22:3306
Is it possible to do this using a bash script? Any help will be appreciated; I am new to bash and HTML, so I am stuck.
My attempt using the code from the answer:
awk 'BEGIN{print "ENVIRONMENT BACKEND_VERSION DB_VERSION EVENT1 EVENT2 SCHEMA NODES NODE_NO VPOOL VERSION IP" } NF==1{env=$0; t=1; next;} t==1{t++; be=$3; next;} t==2{t++; db=$3; next;} t==3{printf "%s %s %s %s\n", env, be, db, $0; env="#";be="#";db="#";}' < "$output" | column -t | tr '#' ' ' >> "$dbstats"
The output is:
ENVIRONMENT BACKEND_VERSION DB_VERSION EVENT1 EVENT2 SCHEMA NODES NODE_NO VPOOL VERSION IP
grid-dev56.0 136 grid_dev Database version: 138
DISABLED DISABLED grid_systest 3 03 1 10.0.19-MariaDBgrid-systest56.0
Database version: 138
SLAVESIDE_DISABLED SLAVESIDE_DISABLED grid_systest 3 01 1 10.0.19-MariaDBgrid-systest56.0
Database version: 138
SLAVESIDE_DISABLED SLAVESIDE_DISABLED grid_systest 3 02 1 10.0.19-MariaDBgrid-staging56.0
Database version: 136
SLAVESIDE_DISABLED SLAVESIDE_DISABLED grid_staging 3 03 1 10.0.19-MariaDBgrid-staging56.0
Database version: 136
SLAVESIDE_DISABLED SLAVESIDE_DISABLED grid_staging 3 02 1 10.0.19-MariaDBgrid-staging56.0
Database version: 136
ENABLED ENABLED grid_staging 3 01 1 10.0.19-MariaDBgrid-production56.0
Database version: 136
SLAVESIDE_DISABLED SLAVESIDE_DISABLED grid_production 3 03 1 10.0.19-MariaDBgrid-production56.0
Database version: 136
SLAVESIDE_DISABLED SLAVESIDE_DISABLED grid_production 3 02 1 10.0.19-MariaDBgrid-production56.0
Database version: 136
DISABLED SLAVESIDE_DISABLED grid_production 3 01 1 10.0.19-MariaDB
Thanks
$ awk 'BEGIN{print "Envirnoment BackEndVersion DBVersion EventName Status Schema" } NF==1{env=$0; t=1; next;} t==1{t++; be=$3; next;} t==2{t++; db=$3; next;} t==3{printf "%s %s %s %s\n", env, be, db, $0; env="#";be="#";db="#";}' <input_file | column -t | tr '#' ' '
Envirnoment BackEndVersion DBVersion EventName Status Schema
grid-dev 6.0 10 swap DISABLED dev_1
busy DISABLED dev_1
grid-test 7.0 11 swap ENABLED test_1
busy ENABLED test_1
grid-staging 6.0 10 swap DISABLED staging_1
busy DISABLED staging_1
grid-production 5.0 9 swap ENABLED prod
busy ENABLES prod
After you edit your question with your attempts, please comment on this answer so that I can add an explanation.
With the format above, it is possible to convert it to HTML using:
awk -v header=1 'BEGIN{OFS="\t"; print "<html><body><table>" }
{
    # escape & first so the entities added below are not re-escaped
    gsub(/&/, "\\&amp;")
    gsub(/</, "\\&lt;")
    gsub(/>/, "\\&gt;")
    print "\t<tr>"
    for(f = 1; f <= NF; f++) {
        if(NR == 1 && header) {
            printf "\t\t<th>%s</th>\n", $f
        }
        else printf "\t\t<td>%s</td>\n", $f
    }
    print "\t</tr>"
}
END {
    print "</table></body></html>"
}' "$FORMATED_TABLE"
This could be useful for someone looking to convert into HTML.
I know it's a late answer to this question, but it will help those googling for a solution for converting bash command output to HTML table format. There is an easy script available to do this at https://sourceforge.net/projects/command-output-to-html-table/ which can be used to convert any command output or file to a nice HTML table format. You can specify the delimiter, including special ones like tabs and newlines, and get the output in HTML table format with an HTML search box at the top.
Just download the script, extract it and issue the following command :
cat test.txt | { cat ; echo ; } | ./tabulate.sh -d " " -t "My Report" -h "My Report" > test.html
This assumes that fields are separated by a space character, as specified by the other solution : https://stackoverflow.com/a/31245048/16923394
If the delimiter is a tab character, then change -d " " to -d $'\t' above.
The output file generated is attached here: https://sourceforge.net/projects/my-project-files/files/test.html/download