Slick 3.2: Unable to perform upsert operation using MySQL

I am using Slick 3.2 to build a simple CRUD application, but when I perform an upsert, the record is not updated, even though the logs suggest that it is. Here is my code:
private val product = TableQuery[Products]
override def updateProduct(id: Int, row: ProductsRow): Future[Int] = db.run {
product.insertOrUpdate(row)
}
The row being upserted is:
ProductsRow(3, "PULLY", "DDD", new Timestamp(new Date().getTime), 4, 202, -1)
My Logs are:
[debug] s.j.J.statement - Preparing statement: insert into `mae_app`.`products` (`id`,`name`,`code`,`date`,`quantity`,`price`,`company_id`) values (?,?,?,?,?,?,?) on duplicate key update `name`=VALUES(`name`), `code`=VALUES(`code`), `date`=VALUES(`date`), `quantity`=VALUES(`quantity`), `price`=VALUES(`price`), `company_id`=VALUES(`company_id`)
[debug] s.j.J.parameter - /-----+--------+--------+-------------------------+-----+------------+-----\
[debug] s.j.J.parameter - | 1   | 2      | 3      | 4                       | 5   | 6          | 7   |
[debug] s.j.J.parameter - | Int | String | String | Timestamp               | Int | BigDecimal | Int |
[debug] s.j.J.parameter - |-----+--------+--------+-------------------------+-----+------------+-----|
[debug] s.j.J.parameter - | 3   | PULLY  | DDD    | 2017-08-03 10:17:47.144 | 4   | 205.0      | -1  |
[debug] s.j.J.parameter - \-----+--------+--------+-------------------------+-----+------------+-----/
The logs suggest the record is updated with the new values, but when I check the database, it still has the old values.
It seems the generated SQL query is not correct: the on duplicate key update `name`=VALUES(`name`) clause contains the name field rather than the name field's value. I am not able to figure out what the exact problem is. How can I resolve it?
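Worth noting: `name`=VALUES(`name`) is actually valid MySQL syntax. Inside an on duplicate key update clause, VALUES(col) evaluates to the value that would have been inserted into col, not to the column's current content, so the generated statement looks correct. A minimal sketch you can run in any MySQL client to verify this (the demo table is hypothetical):
-- hypothetical table with a primary key, so the second insert hits a duplicate
CREATE TABLE demo (id INT PRIMARY KEY, name VARCHAR(50));
INSERT INTO demo VALUES (3, 'OLD');
-- VALUES(name) resolves to 'NEW', the value offered for insertion
INSERT INTO demo (id, name) VALUES (3, 'NEW')
ON DUPLICATE KEY UPDATE name = VALUES(name);
SELECT name FROM demo WHERE id = 3; -- returns 'NEW'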

Related

Parsing JSON Payload in Vertica

I am using Vertica DB (and DBeaver as SQL Editor) - I am new to both tools.
I have a view with multiple columns:
someint | xyz   | c         | json
--------+-------+-----------+-----------------------------------
5       | 1542  | none      | {"range":23, "rm": 51, "spx": 30}
5       | 1442  | none      | {"range":24, "rm": 50, "spx": 3 }
3       | 1462  | none      | {"range":24, "rm": 50, "spx": 30}
(int)   | (int) | (Varchar) | (Long Varchar)
I want to create another view (or, to begin with, just be able to query it properly) of the above, but with the "json" column separated into the individual fields/columns "range", "rm" and "spx".
I imagine the output of the query / the new view to be something like the following:
someint | xyz  | c    | range | rm | spx
--------+------+------+-------+----+----
5       | 1542 | none | 23    | 51 | 30
5       | 1442 | none | 24    | 50 | 3
....
So far I have not even been able to query "range", for example.
Hence my questions:
How can I separate the json column key-value structure into individual columns (in a query output)?
How can I transfer the desired output into a new view in Vertica?
I haven't found much help in the documentation, as the procedure there is to load JSON text files from a drive or to operate on tables, which I cannot do since I only have access to a view.
I have found a solution, so for anyone else encountering this problem:
SELECT someint, xyz, c,
       MAPLOOKUP(MapJSONExtractor(json), 'range') AS range,
       MAPLOOKUP(MapJSONExtractor(json), 'rm') AS rm,
       MAPLOOKUP(MapJSONExtractor(json), 'spx') AS spx
FROM test;
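To turn that into a view (the second question), the same query can be wrapped in CREATE VIEW; a sketch, assuming the source view is named test as above and test_flat is a free choice of name:
CREATE VIEW test_flat AS
SELECT someint, xyz, c,
       MAPLOOKUP(MapJSONExtractor(json), 'range') AS range,
       MAPLOOKUP(MapJSONExtractor(json), 'rm') AS rm,
       MAPLOOKUP(MapJSONExtractor(json), 'spx') AS spx
FROM test;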

Hive Table returning empty result set on all queries

I created a Hive table which loads data from a text file, but it returns an empty result set for all queries.
I tried the following command:
CREATE TABLE table2(
id1 INT,
id2 INT,
id3 INT,
id4 STRING,
id5 INT,
id6 STRING,
id7 STRING,
id8 STRING,
id9 STRING,
id10 STRING,
id11 STRING,
id12 STRING,
id13 STRING,
id14 STRING,
id15 STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
LOCATION '/user/biadmin/lineitem';
The command gets executed and the table gets created, but every query returns 0 rows, including SELECT * FROM table2;
Sample data:
Single line of the input data:
1|155190|7706|1|17|21168.23|0.04|0.02|N|O|1996-03-13|1996-02-12|1996-03-22|DELIVER IN PERSON|TRUCK|egular courts above the|
Output for command: DESCRIBE FORMATTED table2;
| Wed Apr 16 20:18:58 IST 2014 : Connection obtained for host: big-instght-15.persistent.co.in, port number 1528. |
| # col_name data_type comment |
| |
| id1 int None |
| id2 int None |
| id3 int None |
| id4 string None |
| id5 int None |
| id6 string None |
| id7 string None |
| id8 string None |
| id9 string None |
| id10 string None |
| id11 string None |
| id12 string None |
| id13 string None |
| id14 string None |
| id15 string None |
| |
| # Detailed Table Information |
| Database: default |
| Owner: biadmin |
| CreateTime: Mon Apr 14 20:17:31 IST 2014 |
| LastAccessTime: UNKNOWN |
| Protect Mode: None |
| Retention: 0 |
| Location: hdfs://big-instght-11.persistent.co.in:9000/user/biadmin/lineitem |
| Table Type: MANAGED_TABLE |
| Table Parameters: |
| serialization.null.format |
| transient_lastDdlTime 1397486851 |
| |
| # Storage Information |
| SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe |
| InputFormat: org.apache.hadoop.mapred.TextInputFormat |
| OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat |
| Compressed: No |
| Num Buckets: -1 |
| Bucket Columns: [] |
| Sort Columns: [] |
| Storage Desc Params: |
| field.delim | |
+-----------------------------------------------------------------------------------------------------------------+
Thanks!
Please make sure that the location /user/biadmin/lineitem.txt actually exists and has data present there. Since you are using the LOCATION clause, your data must be present there rather than in the default warehouse location, /user/hive/warehouse.
Do a quick ls to verify that:
bin/hadoop fs -ls /user/biadmin/lineitem.txt
Also, make sure that you are using the proper delimiter.
Did you try LOAD DATA LOCAL INFILE?
LOAD DATA LOCAL INFILE '/user/biadmin/lineitem.txt' INTO TABLE table2
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
(id1,id2,id3........);
Documentation: http://dev.mysql.com/doc/refman/5.1/en/load-data.html
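Note that LOAD DATA ... INFILE with FIELDS TERMINATED BY is MySQL syntax (hence the MySQL manual link above). In Hive, the analogous statement is LOAD DATA ... INPATH; a sketch for a file already sitting in HDFS (be aware that Hive moves the file into the table's location):
LOAD DATA INPATH '/user/biadmin/lineitem.txt' INTO TABLE table2;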
Are you using a managed table or an external table? If it is an external table, you should use the EXTERNAL keyword while creating the table, and then load data into it using the LOAD command. If it is a managed table, then after loading data into the table you can see the data in your Hive warehouse directory in Hadoop; the default path is "/user/hive/warehouse/yourtablename".
You should run the LOAD command in the Hive shell.
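A sketch of the EXTERNAL variant, reusing the question's schema (table2_ext is a free choice of name; with EXTERNAL, dropping the table leaves the underlying files in place):
CREATE EXTERNAL TABLE table2_ext (
  id1 INT, id2 INT, id3 INT, id4 STRING, id5 INT,
  id6 STRING, id7 STRING, id8 STRING, id9 STRING, id10 STRING,
  id11 STRING, id12 STRING, id13 STRING, id14 STRING, id15 STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '|'
STORED AS TEXTFILE
LOCATION '/user/biadmin/lineitem'; -- must be a directory containing the data file(s)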
I was able to load the data into the table. The problem was that
LOCATION '/user/biadmin/lineitem';
wasn't loading any data. But when I gave the directory containing the file as the path, like
LOCATION '/user/biadmin/tpc-h';
where I put the lineitem.txt file in the tpc-h directory,
it worked! Hive's LOCATION clause expects a directory (it reads every file inside it), not the path of a single file.

MySQL to Redis - Import and Model

I'm thinking of using Redis to cache some user data snapshots in order to speed up access to that data (one of the reasons is that my MySQL tables suffer from lock contention), and I'm looking for the best way to import, in one step, a table like this (which may contain from a few records to millions of records):
mysql> select * from mytable where snapshot = 1133;
+------+--------------------------+----------------+-------------------+-----------+-----------+
| id   | email                    | name           | surname           | operation | snapshot  |
+------+--------------------------+----------------+-------------------+-----------+-----------+
| 2989 | example-2989#example.com | fake-name-2989 | fake-surname-2989 |         2 |      1133 |
| 2990 | example-2990#example.com | fake-name-2990 | fake-surname-2990 |        10 |      1133 |
| 2992 | example-2992#example.com | fake-name-2992 | fake-surname-2992 |         5 |      1133 |
| 2993 | example-2993#example.com | fake-name-2993 | fake-surname-2993 |         5 |      1133 |
| 2994 | example-2994#example.com | fake-name-2994 | fake-surname-2994 |         9 |      1133 |
| 2995 | example-2995#example.com | fake-name-2995 | fake-surname-2995 |         7 |      1133 |
| 2996 | example-2996#example.com | fake-name-2996 | fake-surname-2996 |         1 |      1133 |
+------+--------------------------+----------------+-------------------+-----------+-----------+
into the Redis key-value store.
I can have many "snapshots" to load into Redis, and the basic access pattern is (SQL-like syntax)
select * from mytable where snapshot = ? and id = ?
these snapshots can also come from other tables, so the "globally unique ID per snapshot" is the snapshot column, e.g.:
mysql> select * from my_other_table where snapshot = 1134;
+------+--------------------------+----------------+-------------------+-----------+-----------+
| id   | email                    | name           | surname           | operation | snapshot  |
+------+--------------------------+----------------+-------------------+-----------+-----------+
| 2989 | example-2989#example.com | fake-name-2989 | fake-surname-2989 |         1 |      1134 |
| 2990 | example-2990#example.com | fake-name-2990 | fake-surname-2990 |         8 |      1134 |
| 2552 | example-2552#example.com | fake-name-2552 | fake-surname-2552 |         5 |      1134 |
+------+--------------------------+----------------+-------------------+-----------+-----------+
The snapshots loaded into Redis never change; they are available only for a week, via TTL.
Is there a way to load this kind of data (rows and columns) into Redis in one step, combining redis-cli --pipe and HMSET?
What is the best model to use in Redis to store/get this data (considering the access pattern)?
I have found redis-cli --pipe in Redis Mass Insertion (and also MySQL to Redis in One Step), but I can't figure out the best way to meet my requirements (load all rows/columns from MySQL in one step, and the best Redis model for this) using HMSET.
Thanks in advance
Cristian.
Model
To be able to query your data from Redis the same way as:
select * from mytable where snapshot = ?
select * from mytable where id = ?
You'll need the model below.
Note: select * from mytable where snapshot = ? and id = ? does not make a lot of sense here, since it's the same as select * from mytable where id = ?.
Key type and naming
[Key Type]  [Key name pattern]
HASH        d:{id}
ZSET        d:ByInsertionDate
SET         d:BySnapshot:{id}
Note: I used d: as a namespace but you may want to rename it with the name of your domain model.
Data insertion
Inserting a new row from MySQL into Redis:
hmset d:2989 id 2989 email example-2989#example.com name fake-name-2989 ... snapshot 1134
zadd d:ByInsertionDate {current_timestamp} d:2989
sadd d:BySnapshot:1134 d:2989
Another example:
hmset d:2990 id 2990 email example-2990#example.com name fake-name-2990 ... snapshot 1134
zadd d:ByInsertionDate {current_timestamp} d:2990
sadd d:BySnapshot:1134 d:2990
Cron
Here is the algorithm that must be run each day or week depending on your requirements:
for key_name in redis(ZRANGEBYSCORE d:ByInsertionDate -inf {timestamp_one_week_ago})
// retrieve the snapshot id from d:{id}
val snapshot_id = redis(hget {key_name} snapshot)
// remove the hash (d:{id})
redis(del key_name)
// remove the hash entry from the set
redis(srem d:BySnapshot:{snapshot_id} {key_name})
// clean the zset from expired keys
redis(zremrangebyscore d:ByInsertionDate -inf {timestamp_one_week_ago})
Usage
select * from my_other_table where snapshot = 1134; will be either:
{snapshot_id} = 1134
for key_name in redis(smembers d:BySnapshot:{snapshot_id})
print(redis(hgetall {key_name}))
or write a Lua script to do this directly on the Redis side. Finally:
select * from my_other_table where id = 2989; will be:
{id} = 2989
print(redis(hgetall d:{id}))
Import
This part is quite easy: just read the table and follow the above model. Depending on your requirements, you may want to import all (or a part) of your data with an hourly/daily/weekly cron.
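For the one-step import itself, the MySQL to Redis in One Step trick mentioned in the question applies directly to this model: let a SELECT emit the Redis commands and pipe them into redis-cli. A sketch (column names from the question; it works here because the sample values contain no spaces, and for millions of rows, generating raw Redis protocol as in the Mass Insertion guide is faster than inline commands):
SELECT CONCAT(
  'HMSET d:', id,
  ' id ', id, ' email ', email, ' name ', name,
  ' surname ', surname, ' operation ', operation, ' snapshot ', snapshot,
  '\nZADD d:ByInsertionDate ', UNIX_TIMESTAMP(), ' d:', id,
  '\nSADD d:BySnapshot:', snapshot, ' d:', id)
FROM mytable
WHERE snapshot = 1133;
-- shell usage, e.g.:
-- mysql -N --raw mydb -e "<the query above>" | redis-cli --pipe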

How to map mysql table to Grails domain class?

I have a MySQL table named anto2, which has only one column, name, of varchar(100). I tried to map this table in a Grails domain class:
class Anto {
String grailsName
static constraints = {
}
static mapping = {
table 'anto2'
grailsName column: 'name'
}
}
After I ran run-app, I can see that two more columns have been added to my table:
+---------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+---------+--------------+------+-----+---------+-------+
| name | varchar(100) | YES | | NULL | |
| id | bigint(20) | NO | | NULL | |
| version | bigint(20) | NO | | NULL | |
+---------+--------------+------+-----+---------+-------+
I generated the static views and controller for this domain class, and when I tried to save, I got an error in my save() method (which was generated using the generate-all command). The error is as follows:
2012-07-09 23:05:26,391 [http-bio-8080-exec-2] ERROR errors.GrailsExceptionResolver - SQLException occurred when processing request: [POST] /mysql/anto/save - parameters:
create: Create
Field 'id' doesn't have a default value. Stacktrace follows:
Message: Field 'id' doesn't have a default value
Line | Method
->> 1073 | createSQLException in com.mysql.jdbc.SQLError
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| 3597 | checkErrorPacket in com.mysql.jdbc.MysqlIO
| 3529 | checkErrorPacket . in ''
| 1990 | sendCommand in ''
| 2151 | sqlQueryDirect . . in ''
| 2625 | execSQL in com.mysql.jdbc.ConnectionImpl
| 2119 | executeInternal . in com.mysql.jdbc.PreparedStatement
| 2415 | executeUpdate in ''
| 2333 | executeUpdate . . in ''
| 2318 | executeUpdate in ''
| 105 | executeUpdate . . in org.apache.commons.dbcp.DelegatingPreparedStatement
| 25 | save in mnm.AntoController
| 1110 | runWorker . . . . in java.util.concurrent.ThreadPoolExecutor
| 603 | run in java.util.concurrent.ThreadPoolExecutor$Worker
^ 722 | run . . . . . . . in java.lang.Thread
Why does this happen? Where did I go wrong?
If you don't specify which kind of id generator to use, GORM will use your database's native strategy. If you're using MySQL as the db, it will generate your ids using the IDENTITY strategy. This strategy requires an auto-increment on your id column.
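In other words, the Field 'id' doesn't have a default value error is MySQL complaining that the id column Grails added is not auto-increment. If you want to keep the native strategy, a sketch of the fix on the MySQL side (types matching the bigint(20) column shown above):
ALTER TABLE anto2
  MODIFY id BIGINT NOT NULL AUTO_INCREMENT,
  ADD PRIMARY KEY (id);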
That said, I think defining a strategy explicitly is better. You can use the SEQUENCE strategy:
id column:'id_anto2', generator:'sequence', params:[sequence:'tab_anto2_seq']
and then create a sequence with that same name in your database (so you don't have to use the one Hibernate creates for you automatically). It's very easy and works like a charm.
Also, to discard the version field, insert version false in your mapping block. So, it would be like:
static mapping = {
table 'anto2'
id column: 'id_anto2', generator: 'sequence', params: [sequence:'tab_anto2_seq']
grailsName column: 'name'
version false
}
It looks like you are missing the id mapping. Try adding an id entry to your static mapping closure. Also, if this is an existing database, you might want to set version to false, or Grails might try to alter the table with a version column, depending on how you have DataSource.groovy set up.
table 'anto2'
version false
id column:'anto2_ID'
grailsName column: 'name'

Set cumulative value when faced with overflowing Int16

I have cumulative input values that start life as smallints.
I read these values from an Access database and aggregate them into a MySQL database.
The input values of type smallint are cumulative, thus always increasing, and eventually overflow:
Input    Required output
------------------------
     0         0
 10000     10000
 32000     32000
-31536     34000   // overflow in the input
-11536     54000
  8464     74000
I process these values by inserting the raw data into a blackhole table; in the blackhole table's trigger I correct the data before inserting it into the actual table.
I know how to store the previous input and output, or, if there is none, how to select the latest (and highest) inserted value.
But what's the easiest/fastest way to deal with the overflow, so that I get the correct output?
Assuming you have a table named test with a primary key called id and a column named value, just do this:
SELECT
id,
test.value,
(SELECT SUM(value) FROM test AS a WHERE a.id <= test.id) as output
FROM test;
This would be the output:
------------------------
| id | value  | output |
------------------------
| 1  | 10000  | 10000  |
| 2  | 32000  | 42000  |
| 3  | -31536 | 10464  |
| 4  | -11536 | -1072  |
| 5  | 8464   | 7392   |
------------------------
Hope this helps.
If it doesn't work, just convert your data to INT (or BIGINT for lots of data). It does not hurt, and memory is cheap these days.
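For the overflow itself: a smallint wraps modulo 65536, so as long as the counter never grows by more than 65536 between two readings, a negative delta between consecutive inputs means exactly one overflow occurred, and adding 65536 to that delta restores the true increase. A sketch of the trigger logic under that assumption (blackhole_input and real_table are hypothetical names for your blackhole and actual tables):
DELIMITER //
CREATE TRIGGER fix_overflow BEFORE INSERT ON blackhole_input
FOR EACH ROW
BEGIN
  -- previous raw reading and corrected cumulative value; the defaults
  -- cover the very first row, when real_table is still empty
  DECLARE prev_in INT DEFAULT 0;
  DECLARE prev_out INT DEFAULT 0;
  SELECT raw_input, output INTO prev_in, prev_out
    FROM real_table ORDER BY id DESC LIMIT 1;
  -- a drop in the raw value signals that the smallint wrapped: add 65536
  INSERT INTO real_table (raw_input, output)
  VALUES (NEW.raw_input,
          prev_out + (NEW.raw_input - prev_in)
                   + IF(NEW.raw_input < prev_in, 65536, 0));
END//
DELIMITER ;
With the sample inputs this yields 0, 10000, 32000, 34000, 54000, 74000, matching the required output above.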