Creating tables in InfluxDB via Terminal - mysql

Are there tutorials online that teach you how to create tables and input values in InfluxDB? How would you create a table and insert values into it?

InfluxDB doesn't really have the concept of a table. Data is structured into series, which is composed of measurements, tags, and fields.
Measurements are like buckets.
Tags are indexed values.
Fields are the actual data.
Data is written into InfluxDB via line protocol. The structure of line protocol is as follows
<measurement>,<tag>[,<tags>] <field>[,<field>] <timestamp>
An example of a point in line protocol:
weather,location=us-midwest temperature=82 1465839830100400200
To insert data into the database you'll need to issue an HTTP POST request to the /write endpoint, specifying the db query parameter.
For example:
curl -XPOST http://localhost:8086/write?db=mydb --data-binary "weather,location=us-midwest temperature=82 1465839830100400200"
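Once a point has been written, you can read it back over HTTP as well. A minimal check, assuming the same InfluxDB 1.x instance and database as the write example above:
curl -G 'http://localhost:8086/query' --data-urlencode "db=mydb" --data-urlencode 'q=SELECT * FROM "weather"'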
For more information see the Getting Started section of the InfluxDB docs.

I just want to quote the moderator of the influxdata community here:
You can think of
measurements as tables in SQL,
tags as indexed columns,
and fields as unindexed columns

Also, there is no "create table" statement. You just insert into a measurement. The HTTP call was shown above. If you have the "influx" command line interpreter, you can do:
export INFLUX_PASSWORD="BlahBlahBlah"
influx -host <hostname> -u <username> -d <database>
insert my_influx_test_measurement,index1=aaa value1="bbb"
Note that "insert" is only a command line (aka "influx") thing. Doesn't work with http calls.
It's too bad they named the command line interpreter "influx". Now when anyone refers to "influx", it's not clear if it's the database or the CLI.
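One thing the examples above gloss over: the target database has to exist before you write to it, otherwise the /write call returns a "database not found" error. A minimal sketch, again assuming InfluxDB 1.x on the default port:
curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE mydb"
The same statement (CREATE DATABASE mydb) can also be run from inside the influx CLI.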

DBIC schema loader: how to skip columns

I have a MySQL DB with multiple tables and views on those tables. A view limits what can be seen to a single customer's data (create view ... where customer_id = X). The Catalyst app will be talking to these views, not to the actual tables. The only difference between the view's columns and the underlying tables' ones is that the view lacks the customer_id column (i.e. to the application it seems like the current customer is the only one in the system).
The problem is, I cannot use DBIC Schema Loader to load the schema from the views, as they lack all the relations and keys. I have to load the schema from the base tables and then use it on the views. The trouble is, I cannot get rid of that customer_id column, and I need to, because it is not present in the view that the application will be talking to.
I ended up using the filter_generated_code option to strip the unneeded bits away from the generated code, but then I get the following error during generation:
DBIx::Class::Schema::Loader::make_schema_at(): No such column customer_id
at /opt/merp/perl/lib/perl5/Catalyst/Helper/Model/DBIC/Schema.pm line 635
How can I have the loader skip certain columns at load time?
I'm not sure how you can get the loader to skip columns at load time, but you can remove them after load. For example, you can add something like this to any class which needs a column removed:
__PACKAGE__->remove_column('customer_id');
I'm not sure if there is an option for that in DBIC::Schema::Loader; the docs will tell you. If there isn't, just generate the schema and then remove the column definition.
But besides that, you seem to be missing a major feature of DBIC: ResultSet chaining.
If you're using e.g. Catalyst, you'd have an action that filters your ResultSet on the stash based on the customer id, and all chained sub-actions would only ever see the allowed rows.
I ended up just leaving the column as it is, so it is visible to the application code. The DB views and triggers ensure the application can only insert and select the currently set customer id. The only trick I employed was using filter_generated_code to replace the underlying table name with the view name (just stripping a leading underscore). This way I now have a script that does a show tables, filters out the views, dumps the structure into the DBIC classes while replacing the table names with the view names, and looks somewhat like this:
exclude=`mysql -u user -ppassword -D db --execute='show tables' \
--silent --skip-column-names | egrep "^_" | sed "s/^_//g" | \
sed ':a;N;$!ba;s/\n/|/g'`
perl script/proj_create.pl model DB DBIC::Schema Proj::Schema \
create=static components=TimeStamp filter_generated_code=\
'sub { my ($type,$class,$text) = @_; $text =~ s/([<"])_/$1/g; return $text; } ' \
exclude="^($exclude)$" dbi:mysql:db 'user' 'password' quote_names=1 '{AutoCommit => 1}'

Using mysqldump to format one insert per line?

This has been asked a few times but I cannot find a resolution to my problem. Basically, when I dump a database using extended inserts with mysqldump (the tool used under the hood by the MySQL Workbench administration features), I get massively long lines of data. I understand why it does this, as it speeds up inserts by inserting the data as one command (especially on InnoDB), but the formatting makes it REALLY difficult to actually look at the data in a dump file, or to compare two files with a diff tool if you are storing them in version control, etc. In my case I am storing them in version control, as we use the dump files to keep track of our integration test database.
Now I know I can turn off extended inserts, so I will get one insert per line, which works, but any time you do a restore with the dump file it will be slower.
My core problem is that in the OLD tool we used to use (MySQL Administrator) when I dump a file, it does basically the same thing but it FORMATS that INSERT statement to put one insert per line, while still doing bulk inserts. So instead of this:
INSERT INTO `coupon_gv_customer` (`customer_id`,`amount`) VALUES (887,'0.0000'),(191607,'1.0300');
you get this:
INSERT INTO `coupon_gv_customer` (`customer_id`,`amount`) VALUES
(887,'0.0000'),
(191607,'1.0300');
No matter what options I try, there does not seem to be any way to get a dump like this, which is really the best of both worlds. Yes, it takes a little more space, but in situations where you need a human to read the files, it makes them MUCH more useful.
Am I missing something and there is a way to do this with MySQLDump, or have we all gone backwards and this feature in the old (now deprecated) MySQL Administrator tool is no longer available?
Try using the following option:
--skip-extended-insert
It worked for me.
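For reference, a minimal invocation with that flag might look like this (user and database names are placeholders):
mysqldump --skip-extended-insert -u myuser -p mydatabase > mydatabase.sql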
With the default mysqldump format, each record dumped will generate an individual INSERT command in the dump file (i.e., the sql file), each on its own line. This is perfect for source control (e.g., svn, git, etc.) as it makes the diff and delta resolution much finer, and ultimately results in a more efficient source control process. However, for significantly sized tables, executing all those INSERT queries can potentially make restoration from the sql file prohibitively slow.
Using the --extended-insert option fixes the multiple INSERT problem by wrapping all the records into a single INSERT command on a single line in the dumped sql file. However, the source control process becomes very inefficient. The entire table's contents are represented on a single line in the sql file, and if a single character changes anywhere in that table, source control will flag the entire line (i.e., the entire table) as the delta between versions. And, for large tables, this negates many of the benefits of using a formal source control system.
So ideally, for efficient database restoration, in the sql file, we want each table to be represented by a single INSERT. For an efficient source control process, in the sql file, we want each record in that INSERT command to reside on its own line.
My solution to this is the following back-up script:
#!/bin/bash
cd my_git_directory/
ARGS="--host=myhostname --user=myusername --password=mypassword --opt --skip-dump-date"
/usr/bin/mysqldump $ARGS --database mydatabase | sed 's$VALUES ($VALUES\n($g' | sed 's$),($),\n($g' > mydatabase.sql
git fetch origin master
git merge origin/master
git add mydatabase.sql
git commit -m "Daily backup."
git push origin master
The result is a sql file INSERT command format that looks like:
INSERT INTO `mytable` VALUES
(r1c1value, r1c2value, r1c3value),
(r2c1value, r2c2value, r2c3value),
(r3c1value, r3c2value, r3c3value);
Some notes:
password on the command line ... I know, not secure, different discussion.
--opt: Among other things, turns on the --extended-insert option (i.e., one INSERT per table).
--skip-dump-date: mysqldump normally puts a date/time stamp in the sql file when created. This can become annoying in source control when the only delta between versions is that date/time stamp. The OS and source control system will date/time stamp the file and version. It's not really needed in the sql file.
The git commands are not central to the fundamental question (formatting the sql file), but they show how I get my sql file back into source control; something similar can be done with svn. When combining this sql file format with your source control of choice, you will find that when your users update their working copies, they only need to move the deltas (i.e., changed records) across the internet, and they can take advantage of diff utilities to easily see what records in the database have changed.
If you're dumping a database that resides on a remote server, if possible, run this script on that server to avoid pushing the entire contents of the database across the network with each dump.
If possible, establish a working source control repository for your sql files on the same server you are running this script from; check them into the repository from there. This will also help prevent having to push the entire database across the network with every dump.
As others have said, using sed to replace "),(" is not safe, as this sequence can appear as content in the database.
There is a way to do this, however:
If your database name is my_database, then run the following:
$ mysqldump -u my_db_user -p -h 127.0.0.1 --skip-extended-insert my_database > my_database.sql
$ sed ':a;N;$!ba;s/)\;\nINSERT INTO `[A-Za-z0-9$_]*` VALUES /),\n/g' my_database.sql > my_database2.sql
You can also use "sed -i" to do the replacement in place (editing the file directly).
Here is what this code is doing:
--skip-extended-insert will create one INSERT INTO for every row you have.
Now we use sed to clean up the data. Note that a regular search/replace with sed applies to a single line, so we cannot match the "\n" character directly, as sed works one line at a time. That is why we add ":a;N;$!ba;", which basically tells sed to work across lines by buffering the next line.
Hope this helps
What about storing the dump into a CSV file with mysqldump, using the --tab option like this?
mysqldump --tab=/path/to/serverlocaldir --single-transaction <database> table_a
This produces two files:
table_a.sql that contains only the table create statement; and
table_a.txt that contains tab-separated data.
RESTORING
You can restore your table via LOAD DATA:
LOAD DATA INFILE '/path/to/serverlocaldir/table_a.txt'
INTO TABLE table_a FIELDS TERMINATED BY '\t' ...
LOAD DATA is usually 20 times faster than using INSERT statements.
If you have to restore your data into another table (e.g. for review or testing purposes) you can create a "mirror" table:
CREATE TABLE table_for_test LIKE table_a;
Then load the CSV into the new table:
LOAD DATA INFILE '/path/to/serverlocaldir/table_a.txt'
INTO TABLE table_for_test FIELDS TERMINATED BY '\t' ...
COMPARE
A CSV file is simplest for diffs or for looking inside, or for non-SQL technical users, who can use common tools like Excel, Access or the command line (diff, comm, etc.).
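For example, comparing two exports of the same table then boils down to a plain text diff, one record per line (the paths here are just illustrative):
diff /backups/monday/table_a.txt /backups/tuesday/table_a.txt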
I'm afraid this won't be possible. In the old MySQL Administrator I wrote the code for dumping db objects, which was completely independent of the mysqldump tool and hence offered a number of additional options (like this formatting or progress feedback). In MySQL Workbench it was decided to use the mysqldump tool instead which, besides being a step backwards in some regards and producing version problems, has the advantage of always staying up to date with the server.
So the short answer is: formatting is currently not possible with mysqldump.
Try this:
mysqldump -c -t --add-drop-table=FALSE --skip-extended-insert -uroot -p<Password> databaseName tableName >c:\path\nameDumpFile.sql
I found this tool very helpful for dealing with extended inserts: http://blog.lavoie.sl/2014/06/split-mysqldump-extended-inserts.html
It parses the mysqldump output and inserts linebreaks after each record, but still using the faster extended inserts. Unlike a sed script, there shouldn't be any risk of breaking lines in the wrong place if the regex happens to match inside a string.
I liked Ace.Di's solution with sed, until I got this error:
sed: Couldn't re-allocate memory
Thus I had to write a small PHP script
mysqldump -u my_db_user -p -h 127.0.0.1 --skip-extended-insert my_database | php mysqlconcatinserts.php > db.sql
The PHP script also starts a new INSERT every 10,000 rows, again to avoid memory problems.
mysqlconcatinserts.php:
#!/usr/bin/php
<?php
/* assuming a mysqldump made using --skip-extended-insert */
$last = '';
$count = 0;
$maxinserts = 10000; // start a fresh INSERT every 10,000 rows to keep statements small

// Note: only INSERT lines are passed through; DDL and comment lines from the dump are dropped.
while ($l = fgets(STDIN)) {
    if (preg_match('/^(INSERT INTO .* VALUES) (.*);/', $l, $s)) {
        if ($last != $s[1] || $count > $maxinserts) {
            if ($count > $maxinserts) { // limit the inserts
                echo ";\n";
            }
            echo "$s[1] ";
            $comma = '';
            $last = $s[1];
            $count = 0;
        }
        echo "$comma$s[2]";
        $comma = ",\n";
    } elseif ($last != '') {
        $last = '';
        echo ";\n";
    }
    $count++;
}
if ($last != '') { // defensive: terminate the final statement if the dump ends with an INSERT line
    echo ";\n";
}
Add
set autocommit=0;
to the first line of your sql script file, and a matching
commit;
at the end (otherwise InnoDB will roll the whole import back when the session ends), then import with:
mysql -u<user> -p<password> --default-character-set=utf8 db_name < <path>\xxx.sql
It can be around 10x faster.
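If you prefer not to edit the dump file itself, the same effect can be achieved by wrapping it on the fly — a small sketch, using the same placeholders as above:
{ echo "SET autocommit=0;"; cat xxx.sql; echo "COMMIT;"; } | mysql -u<user> -p<password> --default-character-set=utf8 db_name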

MySql to PostgreSql migration

My PostgreSQL is installed on Windows. How can I migrate data from a MySQL database to PostgreSQL?
I've read tons of articles. Nothing helps :(
Thanks.
My actions:
mysql dump:
mysqldump -h 192.168.0.222 --port 3307 -u root -p --compatible=postgresql synchronizer > c:\dump.sql
create db synchronizer at pgsql
import dump:
psql -h 192.168.0.100 -d synchronizer -U postgres -f C:\dump.sql
output:
psql:C:/dump.sql:17: NOTICE: table "Db_audit" does not exist, skipping
DROP TABLE
psql:C:/dump.sql:30: ERROR: syntax error at or near "("
LINE 2: "id" int(11) NOT NULL,
^
psql:C:/dump.sql:37: ERROR: syntax error at or near ""Db_audit""
LINE 1:LOCK TABLES "Db_audit" WRITE;
^
psql:C:/dump.sql:39: ERROR: relation "Db_audit" does not exist
LINE 1:INSERT INTO "Db_audit" VALUES (4068,4036,4,1,32,'2010-02-04 ...
^
psql:C:/dump.sql:40: ERROR: relation "Db_audit" does not exist
LINE 1:INSERT INTO "Db_audit" VALUES (19730,2673,2,2,44,'2010-11-23...
^
psql:C:/dump.sql:42: ERROR: syntax error at or near "UNLOCK"
LINE 1:UNLOCK TABLES;
^
psql:C:/dump.sql:48: NOTICE: table "ZHNVLS" does not exist, skipping
DROP TABLE
psql:C:/dump.sql:68: ERROR: syntax error at or near "("
LINE 2: "id" int(10) unsigned NOT NULL,
^
psql:C:/dump.sql:75: ERROR: syntax error at or near ""ZHNVLS""
LINE 1:LOCK TABLES "ZHNVLS" WRITE;
^
psql:C:/dump.sql:77: WARNING: nonstandard use of escape in a string literal
LINE 1:...???????? ??? ???????','10','4607064820115','0','','??????-??...
^
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
Cancel request sent
psql:C:/dump.sql:77: WARNING: nonstandard use of escape in a string literal
LINE 1:...??????????? ????????','10','4602784001189','0','','???????? ...
My experience with MySQL -> PostgreSQL migration wasn't really pleasant, so I'd have to second Daniel's suggestion about CSV files.
In my case, I recreated the schema by hand and then imported all tables, one by one, using mysqldump and pg_restore.
So, while this dump/restore may work for the data, you are most likely out of luck with the schema. I haven't tried any commercial solutions, so see what other people say and... good luck!
UPDATE: I looked at the code the process left behind and here is how I actually did it.
I had a slightly different schema in my PostgreSQL db, so some tables were joined and some were split. This is why a straightforward import was not an option; my case is probably more complex than what you describe, and this solution may be overkill.
For each table in the PG database I wrote a query that selects the relevant data from the MySQL database. In case the table is basically the same in both databases and there are no joins, it can be as simple as this:
select * from mysql_table_name
Then I exported the results of this query to XML; to do this you need to run it like this:
echo "select * from mysql_table_name" | mysql [CONNECTION PARAMETERS] -X --default-character-set=utf8 > mysql_table_name.xml
This will create a simple XML file with the following structure:
<resultset statement="select * from mysql_table_name">
<row>
<field name="some_field">field_value</field>
...
</row>
...
</resultset>
Then I wrote a script that produces an INSERT statement for each row element in this XML file. The name of the table to insert the data into was given as a command line parameter to this script. It's a Python script, in case you need it.
These sql statements were written to a file, and then fed to psql like this:
psql [CONNECTION PARAMETERS] -f FILENAME -1
The only trick in the XML -> SQL transformation is to recognize numbers and leave them unquoted.
To sum it up: mysql can produce query results as XML and you can use it.
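Putting the pieces together, the per-table flow looks roughly like this (xml_to_inserts.py is just a stand-in name for the conversion script mentioned above):
echo "select * from mysql_table_name" | mysql [CONNECTION PARAMETERS] -X --default-character-set=utf8 > mysql_table_name.xml
python xml_to_inserts.py mysql_table_name.xml pg_table_name > pg_table_name.sql
psql [CONNECTION PARAMETERS] -f pg_table_name.sql -1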
It's a bit more complicated than that. There is plenty of documentation here:
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#MySQL
There, you'll also find conversion scripts.
In my rather simple case (30 tables, 10000 records), I used a perl script:
http://pgfoundry.org/frs/?group_id=1000198
It chugged through the mysql dump file and produced a pg dump file, with the issues noted below.
I was importing to Heroku so I used their pgbackups plugin which worked almost flawlessly.
Issues to watch for
Boolean data types. MySQL stores these as 0 and 1. PostgreSQL stores them as t and f. Watch that the booleans don't get migrated as integers.
Auto-incrementing IDs. You may find your ids start counting again from 1. You'll get errors like this: "duplicate key value violates unique constraint ...". It's easy to fix (see the sketch below), but watch out for it.
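One common fix for the second issue, assuming the id columns are backed by ordinary sequences (table and column names here are placeholders), is to bump each sequence to the current maximum after the data is loaded:
psql -d mydatabase -c "SELECT setval(pg_get_serial_sequence('mytable', 'id'), COALESCE((SELECT MAX(id) FROM mytable), 1));"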
I've used py-mysql2pgsql for converting a big MySQL database into Postgres. It handles most cases very well. I had to patch it for a couple of cases specific to my needs, though.
https://pypi.python.org/pypi/py-mysql2pgsql
By default, it reads data from MySQL and writes to Postgres. But you can ask it to write the schema and/or data to a file for inspecting before loading into Postgres.
You can use https://github.com/mihailShumilov/mysql2postgresql
This converter is written in PHP.
There's also a very nice (fork of a) python converter that is maintained by the gitlab creators:
https://github.com/gitlabhq/mysql-postgresql-converter
The original project this was forked from is stale. For me, everything worked perfectly using this script.
Here is a project which migrates your current MySQL database to PostgreSQL in a couple of commands, including indexes and foreign keys. It also allows you to define name, index and column type mappings, so you can override the default behavior.
https://github.com/ggarri/mysql2psql
I hope it can be useful for anyone who is interested in migrating their current project to PG; in our case we obtained around a 20% performance increase.
It is much better to use a program that automates the process of migration.
Even if you are familiar with all the gotchas, doing every step by hand may take a lot of time, especially when your db is "big".
Try FromMySqlToPostgreSql.
This tool is feature-rich and easy to use.
It maps data-types, migrates constraints, indexes, PKs and FKs exactly as they were in your MySQL db.
Under the hood it uses PostgreSQL COPY, so data transfer is very fast.

Using shell script to insert data into remote MYSQL database

I've been trying to get a shell(bash) script to insert a row into a REMOTE database, but I've been having some trouble :(
The script is meant to upload a file to a server, get a URL, HASH, and a file size, connect to a remote mysql database, and insert the data into an existing table. I've gotten it working until the remote MYSQL database bit.
It looks like this:
#!/bin/bash
zxw=randomtext
description=randomtext2
for file in "$@"
do
echo -n *****
ident= *****
data= ****
size=` ****
hash=`****
mysql --host=randomhost --user=randomuser --password=randompass randomdb
insert into table (field1,field2,field3) values('http://www.example.com/$hash','$file','$size');
echo "done"
done
I'm a total noob at programming so yeah :P
Anyway, I added the \ to escape the brackets as I was getting errors. As it is right now, the script works fine until it connects to the mysql database. It just connects to the mysql database and doesn't run the insert command (and I don't even know if the insert command would work in bash).
PS: I've tried both the mysql commands from the command line one by one, and they worked, though I defined the hash/file/size and didn't have the escaping "".
Anyway, what do you guys think? Is what I'm trying to do even possible? If so how?
Any help would be appreciated :)
The insert statement has to be sent to mysql, not another line in the shell script, so you need to make it a "here document".
mysql --host=randomhost --user=randomuser --password=randompass randomdb << EOF
insert into table (field1,field2,field3) values('http://www.site.com/$hash','$file','$size');
EOF
The << EOF means take everything before the next line that contains nothing but EOF (no whitespace at the beginning) as standard input to the program.
This might not be exactly what you are looking for but it is an option.
If you want to bypass the annoyance of actually including your query in the sh script, you can save the query as a .sql file (useful sometimes when the query is REALLY big and complicated). This can be done with simple file IO in whatever language you are using.
Then you can simply include in your sh scrip something like:
mysql -u youruser -pyourpass -h remoteHost yourdb < query.sql &
This is called batch mode execution. Optionally, you can include the ampersand at the end to ensure that that line of the sh script does not block.
Also if you are concerned about the same data getting entered multiple times and your rdbms getting inconsistent, you should explore MySql transactions (commit, rollback, etc).
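For example, the query.sql fed to batch mode can wrap its statements in an explicit transaction so that a failed run leaves nothing half-inserted (table and column names are placeholders, and the rollback only applies to transactional engines such as InnoDB):
cat > query.sql << 'EOF'
START TRANSACTION;
INSERT INTO mytable (field1,field2,field3) VALUES ('http://www.example.com/somehash','somefile','12345');
COMMIT;
EOF
mysql -u youruser -pyourpass -h remoteHost yourdb < query.sql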
Don't use raw SQL from bash; bash has no sane facility for sanitizing the data beforehand. Generate a CSV file and upload that instead.
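A minimal sketch of that approach, assuming the MySQL server permits LOAD DATA LOCAL (the file, table and column names are illustrative, and the values are assumed to contain no commas or newlines):
printf '%s,%s,%s\n' "http://www.example.com/$hash" "$file" "$size" >> rows.csv
mysql --local-infile=1 -h randomhost -u randomuser -prandompass randomdb \
  -e "LOAD DATA LOCAL INFILE 'rows.csv' INTO TABLE mytable FIELDS TERMINATED BY ',' (field1,field2,field3)"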

Validate status of URLs in a mysql Database using CURL and shell script

Good day,
I have a simple MySQL database with 1 table and 3 fields
Table: LINKS
Fields: ID URL STATUS
The table has about 3 million links.
I would like to check all the URLs and post their returned status in the status field so that I can remove the dead links later.
This would probably require a shell script because it will need to run for a long time.
I think CURL headers may provide the best method for checking the status code, but I don't know how to put this all together. Any help on the above or a suggestion for a better way to handle this would be greatly appreciated.
Thank you.
I would rather do this in batches of, say, a thousand, and instead of doing this in bash, I'd do it in PHP or Perl (or any other scripting language of your choice, e.g. Python).
PHP has fopen, which would do the job of curl, so you don't have to spawn a separate curl process for each link check. MySQL connectivity is almost native in both PHP and Perl, too.
The following script can help you get the status; I haven't done the sql part of this, so:
for URL in ...   # get the urls from mysql
do
    STATUS=$(curl -s -o /dev/null -w '%{http_code}' "$URL")
    # set the value of the STATUS column in mysql
done
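A more complete sketch along those lines, with the MySQL parts filled in (credentials and database name are placeholders; this updates one row per URL, so expect it to be slow for 3 million links):
#!/bin/bash
# Pull id/url pairs out of MySQL, probe each URL, and write the HTTP status code back.
mysql -N -B -u myuser -pmypass mydb -e "SELECT ID, URL FROM LINKS" |
while read -r ID URL
do
    STATUS=$(curl -s -o /dev/null --max-time 10 -w '%{http_code}' "$URL")
    mysql -u myuser -pmypass mydb -e "UPDATE LINKS SET STATUS='$STATUS' WHERE ID=$ID"
done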