I found this answer that solves it for one field -> Inserting multiple values into table with anorm
var fields: List[String] = Nil
var values: List[(String, ParameterValue[_])] = Nil
for ((username, i) <- usernames.zipWithIndex) {
  fields ::= "({username%s})".format(i)
  values ::= ("username" + i, username)
}
SQL("INSERT INTO users (username) VALUES %s".format(fields.mkString(",")))
  .on(values: _*)
  .executeUpdate()
How can I pass more fields, like username, address, phonenumber, etc?
I tried ...
def create(names: List[(String, ParameterValue[_])], addresses: List[(String, ParameterValue[_])]) {
  var fields: List[String] = Nil
  for ((a, i) <- names.zipWithIndex) {
    fields ::= "({name%s},{address%s})".format(i, i)
  }
  DB.withConnection { implicit c =>
    SQL("insert into table (name,address) values %s".format(fields.mkString(",")))
      .on(names: _*, addresses: _*)
      .executeUpdate()
  }
}
I get the following error:
" no "_ *" annotation allowed here"
It would be even better if I could use one single list for all the parameters.
You basically want to perform a batch insert. Here's an adaptation taken from the docs:
import anorm.BatchSql

val batch = BatchSql(
  "INSERT INTO table (name, address) VALUES ({name}, {address})",
  // one Seq of named parameters per row: pair each name with its address
  names.zip(addresses).map { case (name, address) => Seq(name, address) }
)

val res: Array[Int] = batch.execute() // array of update counts
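For reference, a minimal end-to-end sketch with plain List[String] inputs (createAll is a hypothetical helper name; older Anorm takes the rows as one Seq[Seq[NamedParameter]] as shown, newer versions take them as varargs):

import java.sql.Connection
import anorm.{ BatchSql, NamedParameter }

def createAll(names: List[String], addresses: List[String])(implicit c: Connection): Array[Int] = {
  // one Seq[NamedParameter] per row; placeholder names must match the SQL
  val rows: Seq[Seq[NamedParameter]] = names.zip(addresses).map {
    case (n, a) => Seq[NamedParameter]("name" -> n, "address" -> a)
  }
  // newer Anorm versions: BatchSql(sql, rows.head, rows.tail: _*)
  BatchSql("INSERT INTO table (name, address) VALUES ({name}, {address})", rows).execute()
}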
I have a table that looks like this:
And the JSON has dynamic keys and looks like this:
{
  "key_1": {
    "value_1": "a",
    "value_2": "b"
  },
  "key_2": {
    "value_1": "c",
    "value_2": "d"
  }
}
I need to parse this table to get an output that looks like this:
I tried it with JS functions but couldn't get it quite right.
Thanks in advance! :)
Consider the approach below:
create temp function get_keys(input string) returns array<string> language js as """
  return Object.keys(JSON.parse(input));
""";
create temp function get_values(input string) returns array<string> language js as """
  return Object.values(JSON.parse(input));
""";
create temp function get_leaves(input string) returns string language js as '''
  function flattenObj(obj, parent = '', res = {}) {
    for (let key in obj) {
      let propName = parent ? parent + '.' + key : key;
      if (typeof obj[key] == 'object') {
        flattenObj(obj[key], propName, res);
      } else {
        res[propName] = obj[key];
      }
    }
    return JSON.stringify(res);
  }
  return flattenObj(JSON.parse(input));
''';
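For the sample JSON in the question, get_leaves returns one flattened object whose keys encode the key.column path, i.e. {"key_1.value_1":"a","key_1.value_2":"b","key_2.value_1":"c","key_2.value_2":"d"}; the query below then splits each key on '.' to recover the row key and the column name.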
create temp table temp as (
  select format('%t', t) row_id, date, name, val,
    split(key, '.')[offset(0)] as key,
    split(key, '.')[offset(1)] as col,
  from your_table t,
  unnest([struct(get_leaves(json_extract(json, '$')) as leaves)]),
  unnest(get_keys(leaves)) key with offset
  join unnest(get_values(leaves)) val with offset using(offset)
);
execute immediate (
  select '''
    select * except(row_id) from temp
    pivot (any_value(val) for col in ("''' || string_agg(distinct col, '","') || '"))'
  from temp
);
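With the sample JSON, the dynamically assembled statement would be roughly select * except(row_id) from temp pivot (any_value(val) for col in ("value_1","value_2")); the exact in list depends on the distinct col values in your data.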
If applied to the sample data in your question, the output is:
I have the following:
def getIds(name: String): java.sql.Array = {
  var ids: Array[Integer] = Array() // var, so that :+= actually keeps the appended ids
  val ps: PreparedStatement = connection.prepareStatement("SELECT id FROM table WHERE name = ?")
  ps.setString(1, name)
  val resultSet = ps.executeQuery()
  while (resultSet.next()) {
    val currentId = resultSet.getInt(1)
    ids :+= Int.box(currentId) // note: a bare `ids :+ currentId` would discard the result
  }
  connection.createArrayOf("INTEGER", ids.toArray[AnyRef])
}
My intention is to feed this method's output into another PreparedStatement using .setArray(1, <array>).
But I'm getting the following error: java.sql.SQLFeatureNotSupportedException
I'm using MySQL. I already tried INTEGER, INT, and BIGINT; none of them worked.
Researching further, I found this:
It seems that MySQL doesn't have array variables. Maybe you can try temporary tables instead of array variables.
So my solution was to create a temp table with just the ids:
val idsStatement = connection.prepareStatement(
  "CREATE TEMPORARY TABLE to_delete_ids SELECT id FROM table WHERE name = ?")
idsStatement.setString(1, name)
idsStatement.executeUpdate()
Then do an inner join in the other statements/queries to achieve the same result:
val statementDeleteUsingIds = connection.prepareStatement(
  "DELETE to_delete_rows FROM table2 to_delete_rows INNER JOIN to_delete_ids tdi ON tdi.id = to_delete_rows.other_tables_id")
statementDeleteUsingIds.executeUpdate()
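Temporary tables in MySQL live for the lifetime of the connection, so if the connection is pooled or reused it can be worth dropping the table explicitly once you're done (same connection and table name as above):

// MySQL drops temporary tables automatically when the connection closes,
// but dropping explicitly avoids name clashes on a pooled/reused connection
val dropStatement = connection.prepareStatement("DROP TEMPORARY TABLE IF EXISTS to_delete_ids")
dropStatement.executeUpdate()
dropStatement.close()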
In my Spark job, I'm using JDBC batch processing to insert records into MySQL. But I noticed that not all the records were making it into MySQL. For example:
// count records before insert
println(s"dataframe: ${dataframe.count()}")

dataframe.foreachPartition(partition => {
  Class.forName(jdbcDriver)
  val dbConnection: Connection = DriverManager.getConnection(jdbcUrl, username, password)
  var preparedStatement: PreparedStatement = null
  dbConnection.setAutoCommit(false)
  val batchSize = 100
  partition.grouped(batchSize).foreach(batch => {
    batch.foreach(row => {
      val productName = row.getString(row.fieldIndex("productName"))
      val quantity = row.getLong(row.fieldIndex("quantity"))
      val sqlString =
        s"""
           |INSERT INTO myDb.product (productName, quantity)
           |VALUES (?, ?)
         """.stripMargin
      preparedStatement = dbConnection.prepareStatement(sqlString)
      preparedStatement.setString(1, productName)
      preparedStatement.setLong(2, quantity)
      preparedStatement.addBatch()
    })
    preparedStatement.executeBatch()
    dbConnection.commit()
    preparedStatement.close()
  })
  dbConnection.close()
})
I see 650 records from dataframe.count, but when I check MySQL I see only 195, and this is deterministic. I tried different batch sizes and still see the same number. However, when I moved preparedStatement.executeBatch() inside batch.foreach(), i.e. to the line right after preparedStatement.addBatch(), all 650 records show up in MySQL. That no longer batches the insert statements, though, since each one is executed immediately after being added within a single iteration. What could be preventing the queries from being batched?
It seems you're creating a new preparedStatement for each row, which means preparedStatement.executeBatch() is only ever applied to the most recently created statement's batch, hence 195 instead of 650 records. Instead, you should create one preparedStatement and only bind the parameters inside the loop, like this:
dataframe.foreachPartition(partition => {
  Class.forName(jdbcDriver)
  val dbConnection: Connection = DriverManager.getConnection(jdbcUrl, username, password)
  val sqlString =
    s"""
       |INSERT INTO myDb.product (productName, quantity)
       |VALUES (?, ?)
     """.stripMargin
  // prepare the statement once; reuse it for every row and every batch
  val preparedStatement: PreparedStatement = dbConnection.prepareStatement(sqlString)
  dbConnection.setAutoCommit(false)
  val batchSize = 100
  partition.grouped(batchSize).foreach(batch => {
    batch.foreach(row => {
      val productName = row.getString(row.fieldIndex("productName"))
      val quantity = row.getLong(row.fieldIndex("quantity"))
      preparedStatement.setString(1, productName)
      preparedStatement.setLong(2, quantity)
      preparedStatement.addBatch()
    })
    preparedStatement.executeBatch()
    dbConnection.commit()
  })
  preparedStatement.close()
  dbConnection.close()
})
I have a case class as below:
case class PowerPlantFilter(
  powerPlantType: Option[PowerPlantType],
  powerPlantName: Option[String],
  orgName: Option[String],
  page: Int,
  onlyActive: Boolean
)
My Table mapping looks like this:
class PowerPlantTable(tag: Tag) extends Table[PowerPlantRow](tag, "powerPlant") {
  def id = column[Int]("powerPlantId", O.PrimaryKey)
  def orgName = column[String]("orgName")
  def isActive = column[Boolean]("isActive")
  def minPower = column[Double]("minPower")
  def maxPower = column[Double]("maxPower")
  def powerRampRate = column[Option[Double]]("rampRate")
  def rampRateSecs = column[Option[Long]]("rampRateSecs")
  def powerPlantType = column[PowerPlantType]("powerPlantType")
  def createdAt = column[DateTime]("createdAt")
  def updatedAt = column[DateTime]("updatedAt")

  def * = {
    (id, orgName, isActive, minPower, maxPower,
      powerRampRate, rampRateSecs, powerPlantType, createdAt, updatedAt) <>
      (PowerPlantRow.tupled, PowerPlantRow.unapply)
  }
}
I would like to iterate over the filter and build the query dynamically. Additionally, I would like the resulting SQL to use a like clause for String types.
So in my case, the orgName in my PowerPlantFilter should be checked for existence, and if present it should produce a like clause in the resulting SQL.
Here is my first attempt, but it obviously fails:
val q4 = all.filter { powerPlantTable =>
  List(
    criteriaPowerPlantType.map(powerPlantTable.powerPlantType === _),
    criteriaOrgName.map(powerPlantTable.orgName like s"%${criteriaOrgName}%") // fails to compile here!
  ).collect({ case Some(criteria) => criteria }).reduceLeftOption(_ && _)
}
Is there something built into Slick to do this?
This is what I arrived at and it works, but I'm not sure if it is efficient:
def powerPlantsFor(criteriaPowerPlantType: Option[PowerPlantType], criteriaOrgName: Option[String], onlyActive: Boolean) = {
  val query = for {
    filtered <- all.filter(f =>
      criteriaPowerPlantType.map(d =>
        f.powerPlantType === d).getOrElse(slick.lifted.LiteralColumn(true)) &&
      criteriaOrgName.map(a =>
        f.orgName like s"%$a%").getOrElse(slick.lifted.LiteralColumn(true))
    )
  } yield filtered
  query.filter(_.isActive === onlyActive)
}
But when I examine the generated SQL, I see two statements being executed against the database, shown below:
[debug] s.j.J.statement - Preparing statement: select `powerPlantId`, `orgName`, `isActive`, `minPower`, `maxPower`, `rampRate`, `rampRateSecs`, `powerPlantType`, `createdAt`, `updatedAt` from `powerPlant` where (true and (`orgName` like '%Organization-%')) and (`isActive` = true) limit 0,5
[debug] s.j.J.statement - Preparing statement: select `powerPlantId`, `orgName`, `isActive`, `minPower`, `maxPower`, `rampRate`, `rampRateSecs`, `powerPlantType`, `createdAt`, `updatedAt` from `powerPlant` where `isActive` = true
How do I optimize this?
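For what it's worth, Slick 3.3+ ships filterOpt and filterIf, which express optional criteria without the LiteralColumn(true) trick. A minimal sketch of the same query under that assumption (not tested against your schema):

def powerPlantsFor(criteriaPowerPlantType: Option[PowerPlantType], criteriaOrgName: Option[String], onlyActive: Boolean) =
  all
    .filterOpt(criteriaPowerPlantType)(_.powerPlantType === _) // applied only when the Option is defined
    .filterOpt(criteriaOrgName)((row, name) => row.orgName like s"%$name%")
    .filter(_.isActive === onlyActive)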
def getAll(userid: BigInteger) = {
  DB.withConnection { implicit connection =>
    // bind the id as a parameter instead of concatenating it into the SQL string
    val dat = SQL("select * from id_info_user where user_id = {userId}")
      .on("userId" -> userid.longValue())
    val data = dat().map(row =>
      RecordAll(row[Int]("country"), row[Int]("age"), row[Int]("gender"), row[Int]("school"),
        row[Int]("college"), row[Int]("specialization"), row[Int]("company"))).toList
    data
  }
}
The table contains six columns which hold only zero or one values.
This gives me the list of row values, but I want only those values which are one.
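If the goal is to keep only the names of the columns whose value is one, one option is to filter the columns per row instead of mapping them into RecordAll. A minimal sketch, assuming the same table and the old-style Anorm API used above (getActiveColumns and the column list are illustrative):

def getActiveColumns(userid: BigInteger): List[List[String]] = {
  DB.withConnection { implicit connection =>
    val columns = List("country", "age", "gender", "school", "college", "specialization", "company")
    SQL("select * from id_info_user where user_id = {userId}")
      .on("userId" -> userid.longValue())()
      .map(row => columns.filter(col => row[Int](col) == 1)) // keep only the columns set to 1 in this row
      .toList
  }
}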