Parse SQL query before it goes to MySQL

In my Go app I want to be able to analyze a SQL query before executing it.
I want to get:
the type (update, insert, delete, etc.). This is easy, but the next steps are not.
the table to be affected,
the columns to be updated (on insert/update),
most important: the condition, as a list of columns and values.
Is there any Go library for this?
Something I can pass a SQL query to and get back a structure with info about that query.

Yes, there is sqlparser for Go.
Note that sqlparser has been pulled out of the database clustering system Vitess.
You can use the parser like this:
reader := strings.NewReader("INSERT INTO table1 VALUES (1, 'a');")
tokens := sqlparser.NewTokenizer(reader)
for {
    stmt, err := sqlparser.ParseNext(tokens)
    if err == io.EOF {
        break
    }
    if err != nil {
        // handle the parse error
        break
    }
    // Do your logic with the statement here.
    _ = stmt
}

Related

How to switch between databases using GORM in golang?

I'm new to GORM in Go. I'm stuck at a point. Generally we select the database like this:
DBGorm, err = gorm.Open("mysql", "user:password@tcp(host:port)/db_name")
But my problem is that I get the db_name in the request, which means I don't know which db_name might come in, and I'll have to query according to that db_name.
So for now I create the database pointer in the init function like this:
DBGorm, err = gorm.Open("mysql", "user:password@tcp(host:port)/"), i.e. without the db_name.
Now how do I switch to the db_name coming to me in the request? When I try to do DBGorm.Create(&con), it shows "No database selected".
If I use database/sql, I can make raw queries like "SELECT * FROM db_name.table_name", which would solve my problem. But how do I do this in GORM?
You can explicitly specify db_name and table_name using .Table() when querying or performing other operations on a table.
DBGorm.Table("db_name.table_name").Create(&con)
I saw a related article on Github. https://github.com/go-sql-driver/mysql/issues/173#issuecomment-427721651
All you need to do is:
start a transaction,
set your database,
run your desired queries,
switch back to your desired DB,
commit once you are done.
Below is an example:
tx := db.Begin() // start transaction
tx.Exec("use " + userDB) // switch to tenant db
tx.Exec("insert into ....") // do some work
tx.Exec("use `no-op-db`") // switch away from tenant db (there is no unuse, so I just use a dummy)
tx.Commit() // end transaction

Performing an SQL "dry run" (from Go application)

I have a tool in Go which generates SQL scripts from a YAML file. To minimize the risk that the generated scripts will fail I'd like to do something like a "dry run", either by means of SQL or the Go application.
My first thought was using the ROLLBACK statement but then the generated script would also include a rollback instead of a commit.
Does SQL or Go provide something like this?
Have you considered running the 'dry run' statements inside a transaction that you always roll back? database/sql provides this via *sql.DB.BeginTx:
db, err := sql.Open(...)
// handle err
txn, err := db.BeginTx(...)
// handle err
defer txn.Rollback()
rows, err := txn.Query(...)

DBGrid / DataSet fails to sort as per sql statement set in CommandText

I'm using the following in my CommandText property of the DataSet I'm using:
SELECT *
FROM table_name
ORDER BY FIELD(priority, 'urgent', 'normal'),
FIELD(state, 'wait', 'executed', 'done')
It should sort the data I'm displaying in the DBGrid connected to this DataSet, like this:
Rows containing urgent in the priority column should start the DBGrid list.
Then the list should continue with the ones marked as normal in the priority column,
followed by the ones marked as wait in the state column,
followed by the ones marked as executed in the state column,
and finally the list ends with the ones marked as done in the state column.
But it doesn't. Well, actually it kind of does, but backwards.
Here is a quick video I've made to show you what's happening, maybe you can get a clearer view this way:
Video of what's happening
I'm guessing it's because of either the ID column I'm using or the Date column, but if so, I have no idea how or why.
This is how those two columns are set up:
The ID column is set as Primary, Unique and Auto_Increment; that's it, no Index or any of the other options.
If the problem isn't those two columns, then maybe it's the DBGrid?
I'm using RAD Studio 10 Seattle, dbExpress components (TSimpleDataSet, etc.) and a MySQL db.
Any thoughts on how to fix this? Thanks!
You are making life unnecessarily difficult for yourself going about it the way you are.
It's not necessary to get the server to do the sorting (by using an ORDER BY clause), and it's arguably better to do the sorting in the client rather than on the server, because the client typically has computing power to spare whereas the server may not.
So, this is my suggested way of going about it:
Drop the ORDER BY from your SQL and just do a SELECT * [...].
Replace your SimpleDataSet with a TClientDataSet and define persistent TFields on it. The reason for making this change is to be able to create two persistent fields of type fkInternalCalc.
In the TFields editor in the Object Inspector, define two fkInternalCalc fields called something like PriorityCode and StateCode.
Set the IndexFieldNames property of your dataset to 'PriorityCode;StateCode'.
In the OnCalcFields event of your dataset, calculate values for the PriorityCode and StateCode that will give the sort order you wish the data rows to have.
Something like:
procedure TForm1.ClientDataSet1CalcFields(DataSet: TDataSet);
var
  S : String;
  PriorityCodeField,
  StateCodeField : TField;
  iValue : Integer;
begin
  PriorityCodeField := ClientDataset1.FieldByName('PriorityCode');
  StateCodeField := ClientDataset1.FieldByName('StateCode');

  S := ClientDataset1.FieldByName('Priority').AsString;
  if S = 'urgent' then
    iValue := 1
  else if S = 'normal' then
    iValue := 2
  else
    iValue := 999;
  PriorityCodeField.AsInteger := iValue;

  S := ClientDataset1.FieldByName('State').AsString;
  if S = 'wait' then
    iValue := 1
  else if S = 'executed' then
    iValue := 2
  else if S = 'done' then
    iValue := 3
  else
    iValue := 999;
  StateCodeField.AsInteger := iValue;
end;
Actually, it would be better (faster, less overhead) to avoid using FieldByName and just use the TFields that the Object Inspector's TFields editor creates, since these are automatically bound to the ClientDataSet's data fields when it is opened.
Btw, it's useful to bear in mind that although a TClientDataSet cannot be sorted on a field defined in the TFields editor as Calculated, it can be sorted on an InternalCalc field.

Inserting in MySQL Table

I am trying to insert data into a MySQL table through the MySQL C client, via the steps written below.
The command is of the form (a variable string generated at run time):
INSERT INTO department values('Statistics','Taylor',395051.74)
which is correct for MySQL.
if (mysql_query(con, command))
{
printf("Done\n");
}
printf("\n%s\n",command);
But my database shows no change. No rows get inserted. Is there any way the above steps could fail?
Note that mysql_query returns zero if it is successful, and a nonzero error code if it is unsuccessful (see the MySQL docs). I think you might be treating it backwards: your code prints "Done" precisely when the call fails. So it's likely issuing an error you're not catching.
As a guess of what might be wrong, try telling it what columns you're inserting into:
INSERT INTO department (`column1`,`column2`,`column3`)
values ('Statistics','Taylor',395051.74)

How do I pass a []slice to an IN-condition in a prepared SQL statement with non-IN-conditions as well?

Imagine you have the following SQL query:
SELECT *
FROM foo
WHERE type = ?
AND subtype IN (?)
And you have the following possible data (we imagine that a user interface can set these data):
var Type int
var SubTypes []int
In the case of SubTypes, we are talking about a multiple choice selection.
Now, the following code won't work:
rows, err := sqldb.Query(`SELECT *
FROM foo
WHERE type = ?
AND subtype IN (?)`, Type, SubTypes)
Because the driver (at least the mysql driver used in this example) doesn't recognise a []slice. Trying to explode it (SubTypes...) doesn't work either, because A) you cannot have more than one exploded parameter and B) even if you could, your SQL only supports a single item ((?)).
However, there is a solution. First of all, since we can only have a single exploding parameter and no others, we should first put together our parameters in a single []slice:
var params []interface{}
params = append(params, Type)
for _, subtype := range SubTypes {
    params = append(params, subtype)
}
Since the SQL will not expand on its own, let's also build the placeholder list in that loop:
var params []interface{}
params = append(params, Type)
inCondition := ""
for _, subtype := range SubTypes {
    params = append(params, subtype)
    if inCondition != "" {
        inCondition += ", "
    }
    inCondition += "?"
}
Assuming SubTypes contains []int{1,2,3}, inCondition should now contain ?, ?, ?.
We then combine that to our SQL statement and explode the argument:
sqlstr := fmt.Sprintf(`SELECT *
    FROM foo
    WHERE type = ?
    AND subtype IN (%s)`, inCondition)
rows, err := sqldb.Query(sqlstr, params...)
Of course, it would be pretty cool if you could simply pass []slices to your prepared statements and have them automatically expanded. But that might give some unexpected results if you are dealing with more 'unknown' data.
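The expansion above can be wrapped in a small helper. This is a minimal sketch; expandIn is a hypothetical name, and it only builds the placeholder string and the flattened argument list:

```go
package main

import (
	"fmt"
	"strings"
)

// expandIn builds a "?, ?, ?" placeholder list for the IN clause and a flat
// argument slice that starts with the fixed parameter (here: the type value).
func expandIn(typeVal int, subTypes []int) (string, []interface{}) {
	placeholders := make([]string, len(subTypes))
	params := make([]interface{}, 0, len(subTypes)+1)
	params = append(params, typeVal)
	for i, st := range subTypes {
		placeholders[i] = "?"
		params = append(params, st)
	}
	return strings.Join(placeholders, ", "), params
}

func main() {
	in, params := expandIn(7, []int{1, 2, 3})
	sqlstr := fmt.Sprintf("SELECT * FROM foo WHERE type = ? AND subtype IN (%s)", in)
	fmt.Println(sqlstr) // SELECT * FROM foo WHERE type = ? AND subtype IN (?, ?, ?)
	fmt.Println(params) // [7 1 2 3]
	// rows, err := sqldb.Query(sqlstr, params...)
}
```

Note that the caller must still handle an empty SubTypes slice, since IN () is not valid SQL.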
Prepared statements do not work that way, at least not in any major DBMS I know of. I mean, the support for prepared statements implemented by Go's database/sql drivers is supposed to use the corresponding facility provided by the underlying DBMS (a driver might opt to simulate such support if it's not provided by the DB engine it interfaces with).
Now in all the DBMS-s I'm familiar with, the whole idea of prepared statement is that it's processed once by the DB engine and cached; "processed" here means syntax checking, compiling into some DB-specific internal representation and its execution plan figured out. As follows from the term "compiled", the statement's text is processed exactly once, and then each call to the prepared statement just essentially tells the server "here is the ID of that prepared statement I supplied you earlier, and here's the list of actual parameters to use for placeholders it contained". It's like compiling a Go program and then calling it several times in a row with different command-line flags.
So the solution you have come up with is correct: if you want to mess with the statement text between invocations then by all means use client-side text manipulation1, but do not attempt to use the result as a prepared statement unless you really intend to execute the resulting text more than once.
And to be maybe more clear: your initial attempt to prepare something like
SELECT a, b FROM foo WHERE a IN (?)
supposedly fails at your attempt to supply a set of values for that IN (?) placeholder because commas which would be required there to specify several values are syntax, not parts of the value.
I think it should still be fine to prepare something like
SELECT a, b FROM foo WHERE a IN (?, ?, ?)
because it does not break that rule. Not that it's a solution for you…
See also this and this — studying the latter would allow you to play with prepared statements directly in the MySQL client.
1 Some engines provide for server-side SQL generation with subsequent execution of the generated text.