I want to capture the output of a mysqlsh script (LOAD DATA):
logs=$(mysqlsh --user=$USER --password=$PASS --socket=/var/run/mysqld/mysqld.sock < /script.js)
echo $logs
script.js:
util.importTable('/my.csv', {schema: 'db', table: 'mytable', dialect: 'csv-unix',
    fieldsTerminatedBy: ';', linesTerminatedBy: '\n', replaceDuplicates: true,
    showProgress: false, bytesPerChunk: '200M', threads: 3, columns: [...]});
On the console I see:
WARNING: Using a password on the command line interface can be insecure.
Importing from file '/my.csv' to table `db`.`mytable` in MySQL Server at /var%2Frun%2Fmysqld%2Fmysqld.sock using 3 threads
[Worker000] db.mytable: Records: 351229 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] db.mytable: Records: 357374 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] db.mytable: Records: 352552 Deleted: 0 Skipped: 0 Warnings: 0
...
File '/my.csv' (21.11 GB) was imported in 5 min 22.7004 sec at 65.42 MB/s
Total rows affected in db.mytable: Records: 37129973 Deleted: 0 Skipped: 0 Warnings: 0
BUT: echo $logs shows an empty line.
Why is the mysqlsh output not captured here?
With the help above: indeed, appending 2>&1 fixes the problem, since mysqlsh writes its progress output to stderr:
.../script.js 2>&1)
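The root cause can be reproduced without mysqlsh: command substitution captures only stdout, so anything a program prints to stderr (as mysqlsh does for its progress messages) is lost unless redirected. A minimal sketch with a stand-in function (`emit` is hypothetical, standing in for mysqlsh):

```shell
# 'emit' stands in for mysqlsh: one line to stdout, one line to stderr.
emit() {
  echo "to stdout"
  echo "to stderr" >&2
}

logs=$(emit)        # captures stdout only; "to stderr" goes to the terminal
echo "captured: $logs"

logs=$(emit 2>&1)   # merge stderr into stdout before the capture happens
echo "captured: $logs"
```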
In short
I need to load several JSON templates to ElasticSearch within filebeat.yaml configuration
I have
Directory with templates:
-rootdir
|
| - templates
|
|- some-template.json
|- some-2-template.json
|- some-3-template.json
Pre-setup properties in filebeat.yaml configuration, like:
setup.template:
json:
enabled: true
path: /rootdir/templates
pattern: "*-template.json"
name: "json-templates"
This is only a blueprint, because I do not know how to load all the templates to Elasticsearch; a single template loads successfully with this config if I append its filename to path, for example /some-template.json.
After starting Filebeat I get the following error log:
ERROR [publisher_pipeline_output] pipeline/output.go:154 Failed to connect to backoff(elasticsearch(http://:9200)): Connection marked as failed because the onConnect callback failed: error loading template: error reading file /rootdir/templates for template: read /rootdir/templates: is a directory
Question is
How can I upload multiple files within one property, with a different index pattern in each template, so that the result of running GET _cat/templates?v=true looks like this:
name index_patterns order version composed_of
some-template [some*] 0 7140099
some-2-template [some-2*] 0 7140099
some-3-template [some-3*] 0 7140099
.monitoring-es [.monitoring-es-7-*] 0 7140099
.monitoring-alerts-7 [.monitoring-alerts-7] 0 7140099
.monitoring-logstash [.monitoring-logstash-7-*] 0 7140099
.monitoring-kibana [.monitoring-kibana-7-*] 0 7140099
.monitoring-beats [.monitoring-beats-7-*] 0 7140099
ilm-history [ilm-history-5*] 2147483647 5 []
.triggered_watches [.triggered_watches*] 2147483647 12 []
.kibana-event-log-7.16.3-template [.kibana-event-log-7.16.3-*] 0 []
.slm-history [.slm-history-5*] 2147483647 5 []
synthetics [synthetics-*-*] 100 1 [synthetics-mappings, data-streams-mappings, synthetics-settings]
metrics [metrics-*-*] 100 1 [metrics-mappings, data-streams-mappings, metrics-settings]
.watch-history-12 [.watcher-history-12*] 2147483647 12 []
.deprecation-indexing-template [.logs-deprecation.*] 1000 1 [.deprecation-indexing-mappings, .deprecation-indexing-settings]
.watches [.watches*] 2147483647 12 []
logs [logs-*-*] 100 1 [logs-mappings, data-streams-mappings, logs-settings]
.watch-history-13 [.watcher-history-13*] 2147483647 13 []
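Since the error says setup.template.json.path expects a single file rather than a directory, one alternative is to load each template file straight into Elasticsearch before starting Filebeat. A hedged shell sketch, not from the Filebeat docs: the ES_HOST value is an assumption, and it uses the legacy 7.x _template API (composable templates use _index_template instead):

```shell
ES_HOST="http://localhost:9200"   # assumed Elasticsearch address

# PUT each *-template.json into Elasticsearch under its file name,
# e.g. /rootdir/templates/some-template.json -> _template/some-template
for f in /rootdir/templates/*-template.json; do
  name=$(basename "$f" .json)
  curl -s -X PUT "$ES_HOST/_template/$name" \
       -H 'Content-Type: application/json' \
       --data-binary "@$f"
done
```

In a Docker Compose setup this loop could run as a one-shot init container, or as a script executed after the elasticsearch service reports healthy.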
Additionally
I'm running Filebeat and Elasticsearch in Docker using Docker Compose, in case that is helpful somehow.
Thank you in advance!
Best Regards, Anton.
I need to know, when I update data, whether there was a change in the database: normally affected rows will return 1 when something changed and 0 when nothing changed.
I tried both of these:
getManager().createQueryBuilder(Area).update(data).where({ area_id: id }).execute();
getRepository().update(id, data);
but both always return this: UpdateResult { generatedMaps: [], raw: [], affected: 1 }
If I use a raw query:
getManager().connection.query('update area set .... where ....')
it will return this:
ResultSetHeader {
fieldCount: 0,
affectedRows: 1,
insertId: 0,
info: 'Rows matched: 1 Changed: 0 Warnings: 0',
serverStatus: 2,
warningStatus: 0,
changedRows: 0
}
With this I can get the changedRows value to use in the next action, but I have to build the query manually.
Driver: mysql, mariadb
Using the mysql2 Node module
Thank you
I've stumbled upon the aforementioned error (disabling IRQ #31) during boot and have tried to resolve it by finding out what caused the interrupt.
Running lspci -v | grep 31 gives no result, and cat /proc/interrupts | grep 31 returns:
31: 0 0 0 0 100000 0 0 0 IO-APIC 31-fasteoi tpm0
122: 0 0 0 0 0 0 0 0 PCI-MSI 3162112-edge pciehp
131: 0 0 0 0 0 0 4780 0 PCI-MSI 1572871-edge nvme0q7
136: 0 0 0 0 0 0 377 0 PCI-MSI 520192-edge enp0s31f6
160: 0 0 1 0 0 0 1260 0 PCI-MSI 333831-edge iwlwifi: queue 7
RES: 25967 11451 13108 4439 3441 4165 4926 4203 Rescheduling interrupts
How should I proceed knowing that it maybe has to do something with IO-APIC 31-fasteoi tpm0?
Many thanks.
I ran into a similar problem on my Lenovo L390 (running Fedora 31). In my case the issue reproduced when waking the laptop from suspend.
For me the issue was resolved (or worked around) by disabling the Trusted Platform Module in the BIOS.
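To confirm that the tpm0 line is the one accumulating interrupts, the per-CPU columns of its row in /proc/interrupts can be summed before and after a suspend/resume cycle. A small awk helper, as an illustrative sketch (not from the original posts):

```shell
# Sum the per-CPU interrupt counts for one IRQ number from
# /proc/interrupts-style input (numeric columns after "NN:" up to
# the interrupt-chip name).
irq_total() {
  awk -v irq="$1:" '$1 == irq {
    t = 0
    for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) t += $i
    print t
  }'
}

# usage: irq_total 31 < /proc/interrupts
```

Running it twice around a resume would show whether the tpm0 count jumps by the roughly 100000 interrupts that trigger the kernel's "disabling IRQ" threshold.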
I have very little experience in programming and almost no experience in Tcl/Expect, but I'm forced to use it.
I have output like this:
SMG2016-[CONFIG]-SIP-USERS> add user 1200011 adm.voip.partner.ru 0
Creating new Sip-User.
'SIP USER' [00] ID [1]:
name: Subscriber#000
IPaddr: 0.0.0.0
SIP domain: adm.voip.partner.ru
dynamic registration: off
number: 1200011
Numplan: 0
number list:
00) ---
01) none
02) none
03) none
04) none
AON number:
AON type number: subscriber
profile: 0
category: 1
access cat: 0
auth: none
cliro: off
pbxprofile: 0
access mode: on
lines: 1
No src-port control: off
BLF usage: off
BLF subscribers: 10
Intercom mode: sendonly
Intercom priority: 3
So I need to put into a variable the 00 from the 'SIP USER' [00] line; the number in brackets can be up to four digits in a row.
How should I do this? Any help please?
UPD:
The following works for me, even though the leading zero caused some trouble:
expect -indices -re "'SIP USER' .{1}(\[0-9]{2,4}).{1}"
set userid [string trimleft $expect_out(1,string) "0"]
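The leading-zero trouble comes from the trimming step: stripping every leading "0" from a capture like "00" leaves an empty string. One way around it is to always keep the final digit when trimming. A shell analogue of the extraction (a sketch to illustrate the idea, not Expect code):

```shell
line="'SIP USER' [00] ID [1]:"

# Pull the 2-4 digit group out of its brackets
# ([1] has too few digits to match the {2,4} quantifier).
digits=$(printf '%s\n' "$line" | sed -n 's/.*\[\([0-9]\{2,4\}\)\].*/\1/p')

# Strip leading zeros but keep a final digit, so "00" -> "0" rather than "".
userid=$(printf '%s\n' "$digits" | sed 's/^0*\([0-9]\)/\1/')

echo "$userid"   # prints 0
```

The same keep-one-digit idea carries over to the Tcl side; alternatively, parsing the capture as a decimal number (for example with Tcl's scan) turns "00" into 0 directly.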
[CLEANUP], Operations, 1
[CLEANUP], AverageLatency(us), 627.0
[CLEANUP], MinLatency(us), 627
[CLEANUP], MaxLatency(us), 627
[CLEANUP], 95thPercentileLatency(ms), 0
[CLEANUP], 99thPercentileLatency(ms), 0
[CLEANUP], 0, 1
[CLEANUP], 1, 0
[CLEANUP], 2, 0
[CLEANUP], 3, 0
[CLEANUP], 4, 0
Taken from YCSB output, what is a database cleanup operation? I am trying to understand why MySQL takes so much longer during this phase than a MongoDB system.
The cleanup operation is needed to shut down the thread(s) that were providing the workload to the DB. So, if you set the parameter -threads n, you will see exactly n cleanup operations at the end of the benchmark!