I have a MySQL test.sql file that contains a stored procedure.
To load this SQL file after connecting to the database in my Go server, I used the Exec command, but I didn't get the result I wanted. I got error code
1064 You have an error in your SQL syntax; check the manual that corr...
How can I load a stored procedure from a .sql file after connecting to the database in Go?
// Go code section:
sqlProc, err := ioutil.ReadFile("E:/Qlass/goserv/src/cevir/test.sql")
// handle error
_, err = MAPP.DB.Db.Exec(string(sqlProc))
// handle error
// content of test.sql
drop procedure if exists Test;
delimiter ;;
create procedure Test()
begin
truncate table _prlog;
end ;;
delimiter ;
The problem is caused by the DELIMITER command: DELIMITER is a directive of the mysql command-line client, not an SQL statement, so the server (and therefore Exec) cannot parse it. I removed those lines and the problem was solved. The corrected SQL file:
drop procedure if exists Test;
create procedure Test()
begin
truncate table _prlog;
end ;
I would expect something more like this:
DB, err = sql.Open("mysql", MAPP.CF.Mysql)
if err != nil {
// handle error
}
data, err := ioutil.ReadFile(`E:/Qlass/goserv/src/modul/modul_sp.sql`)
if err != nil {
// handle error
}
sqlProc := string(data)
_, err = DB.Exec(sqlProc)
if err != nil {
// handle error
}
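For files that keep the DELIMITER lines, a minimal delimiter-aware loader can do the splitting that the mysql client would normally do. This is only a sketch under stated assumptions: go-sql-driver/mysql as the driver, a placeholder DSN and file path, and one statement per terminator:

package main

import (
	"database/sql"
	"os"
	"strings"

	_ "github.com/go-sql-driver/mysql"
)

// execScript interprets the client-side DELIMITER directive itself
// (the server never understands DELIMITER) and sends each complete
// statement to Exec on its own.
func execScript(db *sql.DB, script string) error {
	delim := ";"
	var buf strings.Builder
	flush := func() error {
		stmt := strings.TrimSpace(buf.String())
		buf.Reset()
		if stmt == "" {
			return nil
		}
		_, err := db.Exec(stmt)
		return err
	}
	for _, line := range strings.Split(script, "\n") {
		trimmed := strings.TrimSpace(line)
		// "delimiter ;;" switches the statement terminator; it is not SQL.
		if strings.HasPrefix(strings.ToLower(trimmed), "delimiter ") {
			delim = strings.TrimSpace(trimmed[len("delimiter "):])
			continue
		}
		if strings.HasSuffix(trimmed, delim) {
			buf.WriteString(strings.TrimSuffix(trimmed, delim))
			if err := flush(); err != nil {
				return err
			}
			continue
		}
		buf.WriteString(line)
		buf.WriteString("\n")
	}
	return flush()
}

func main() {
	// The DSN is a placeholder; use your own connection string.
	db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/test")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	data, err := os.ReadFile("test.sql")
	if err != nil {
		panic(err)
	}
	if err := execScript(db, string(data)); err != nil {
		panic(err)
	}
}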
Related
I created a function in MariaDB whose body is:
BEGIN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'MY_ERROR_MESSAGE';
RETURN 1;
END
Then in Golang, I tried to select this function, which should fail with an error.
dbResult := db.QueryRow("SELECT `function1`()")
if dbResult.Err() != nil {
fmt.Printf("error (%s)", dbResult.Err().Error())
}
But the problem is that Err() is nil, so fmt.Printf is not executed. What is the problem?
The funny part is that I then created a procedure:
BEGIN
SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'MY_ERROR_MESSAGE';
END
and I used the same Golang code to call it:
dbResult := db.QueryRow("CALL `procedure1`()")
if dbResult.Err() != nil {
fmt.Printf("error (%s)", dbResult.Err().Error())
}
but this time everything is OK and Err() is not nil any more! Why? What is the problem?
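A hedged note on the behavior: Row.Err() only reports errors encountered while starting the query, and with many MySQL drivers the SIGNAL raised inside a SELECTed function only arrives together with the result row, so it surfaces from Scan instead. A minimal sketch, assuming db and fmt are in scope and function1 exists as above:

var result int
row := db.QueryRow("SELECT `function1`()")
// Err() covers errors hit while starting the query; the SIGNAL raised
// inside the function arrives with the result row, so Scan sees it.
if err := row.Scan(&result); err != nil {
	fmt.Printf("error (%s)\n", err.Error())
	return
}
fmt.Println("result:", result)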
I wrote code that behaves weirdly and slowly, and I can't understand why.
What I'm trying to do is download data from BigQuery (using a query as input) to a CSV file, then create a URL link for this CSV so people can download it as a report.
I'm trying to optimize the process of writing the CSV, as it takes some time and shows some weird behavior.
The code iterates over the BigQuery results and passes each row to a channel for parsing/writing with the encoding/csv package.
These are the relevant parts, with some debugging:
func (s *Service) generateReportWorker(ctx context.Context, query, reportName string) error {
it, err := s.bigqueryClient.Read(ctx, query)
if err != nil {
return err
}
filename := generateReportFilename(reportName)
gcsObj := s.gcsClient.Bucket(s.config.GcsBucket).Object(filename)
wc := gcsObj.NewWriter(ctx)
wc.ContentType = "text/csv"
wc.ContentDisposition = "attachment"
csvWriter := csv.NewWriter(wc)
var doneCount uint64
go backgroundTimer(ctx, it.TotalRows, &doneCount)
rowJobs := make(chan []bigquery.Value, it.TotalRows)
workers := 10
wg := sync.WaitGroup{}
wg.Add(workers)
// start worker pool
for i := 0; i < workers; i++ {
go func(c context.Context, num int) {
defer wg.Done()
for row := range rowJobs {
records := make([]string, len(row))
for j, r := range row {
records[j] = fmt.Sprintf("%v", r)
}
s.mu.Lock()
start := time.Now()
if err := csvWriter.Write(records); err != nil {
log.Errorf("Error writing row: %v", err)
}
if time.Since(start) > time.Second {
fmt.Printf("worker %d took %v\n", num, time.Since(start))
}
s.mu.Unlock()
atomic.AddUint64(&doneCount, 1)
}
}(ctx, i)
}
// read results from bigquery and add to the pool
for {
var row []bigquery.Value
if err := it.Next(&row); err != nil {
if err == iterator.Done || err == context.DeadlineExceeded {
break
}
log.Errorf("Error loading next row from BQ: %v", err)
}
rowJobs <- row
}
fmt.Println("***done loop!***")
close(rowJobs)
wg.Wait()
csvWriter.Flush()
wc.Close()
url := fmt.Sprintf("%s/%s/%s", s.config.BaseURL, s.config.GcsBucket, filename)
/// ....
}
func backgroundTimer(ctx context.Context, total uint64, done *uint64) {
ticker := time.NewTicker(10 * time.Second)
go func() {
for {
select {
case <-ctx.Done():
ticker.Stop()
return
case <-ticker.C:
fmt.Printf("progress (%d,%d)\n", atomic.LoadUint64(done), total)
}
}
}()
}
The bigquery Read func:
func (c *Client) Read(ctx context.Context, query string) (*bigquery.RowIterator, error) {
job, err := c.bigqueryClient.Query(query).Run(ctx)
if err != nil {
return nil, err
}
it, err := job.Read(ctx)
if err != nil {
return nil, err
}
return it, nil
}
I run this code with a query that returns about 400,000 rows. The query itself takes around 10 seconds, but the whole process takes around 2 minutes.
The output:
progress (112346,392565)
progress (123631,392565)
***done loop!***
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
progress (123631,392565)
worker 3 took 1m16.728143875s
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
progress (247525,392565)
worker 3 took 1m13.525662666s
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
progress (370737,392565)
worker 4 took 1m17.576536375s
progress (392565,392565)
You can see that writing the first 112,346 rows was fast; then for some reason worker 3 took 1m16s (!) to write a single row, which caused the other workers to wait for the mutex to be released. This happened two more times and caused the whole process to take more than 2 minutes to finish.
I'm not sure what's going on and how I can debug this further. Why do I have these stalls in the execution?
As suggested by @serge-v, you can write all the records to a local file and then transfer the file as a whole to GCS. To make the process happen in a shorter time span you can split the output into multiple chunks and use this command: gsutil -m cp -j where
gsutil is used to access Cloud Storage from the command line,
-m performs a parallel multi-threaded/multi-processing copy,
cp copies the files, and
-j applies gzip transport encoding to the file uploads; this saves network bandwidth while leaving the data uncompressed in Cloud Storage.
To apply this command in your Go program you can refer to this GitHub link, or see the sketch below.
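A minimal sketch of running that command from Go, assuming gsutil is installed and on PATH; the bucket name and chunk pattern are placeholders:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// gsutil expands the local wildcard itself, so no shell is needed.
	// -m parallelizes the copy; "-j csv" gzip-encodes the .csv uploads.
	cmd := exec.Command("gsutil", "-m", "cp", "-j", "csv",
		"report-chunk-*.csv", "gs://my-bucket/reports/")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("gsutil failed: %v\n%s", err, out)
	}
}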
You could try implementing profiling in your Go program. Profiling will help you analyze the program's complexity and find where the time is being spent, as in the sketch below.
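For example, a sketch using the standard net/http/pprof handler; the block profile in particular could show where the workers wait on the mutex (the port is arbitrary):

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
	"runtime"
)

func main() {
	// Sample every blocking event so /debug/pprof/block shows where
	// goroutines wait on the mutex or the channel.
	runtime.SetBlockProfileRate(1)
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()
	// ... run the report generation here, then inspect with e.g.
	//   go tool pprof http://localhost:6060/debug/pprof/profile
	select {} // placeholder that keeps the sketch serving
}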
Since you are reading hundreds of thousands of rows from BigQuery, you can try using the BigQuery Storage API. It provides faster access to BigQuery-managed storage than the bulk data export does. Using the BigQuery Storage API rather than the iterator in your Go program can make the process faster.
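A hedged sketch of one way to do that with the Go client: recent cloud.google.com/go/bigquery releases expose EnableStorageReadClient, which routes query reads through the Storage Read API; treat the call as an assumption and verify it against your client version:

import (
	"context"

	"cloud.google.com/go/bigquery"
)

// readFast opens a client whose query reads go through the Storage
// Read API instead of paging through the regular result iterator.
func readFast(ctx context.Context, projectID, query string) (*bigquery.RowIterator, error) {
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return nil, err
	}
	// Assumption: available in recent cloud.google.com/go/bigquery versions.
	if err := client.EnableStorageReadClient(ctx); err != nil {
		return nil, err
	}
	return client.Query(query).Read(ctx)
}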
For more reference you can also look into the Query Optimization techniques provided by BigQuery.
I created a MySQL database with some stored procedures. Using MySQL Workbench the SPs work fine, and now I need to launch them from a C program.
I created the program, which connects successfully to my DB, and I'm able to launch procedures that don't require parameters.
To launch more complex procedures I need to use prepared statements in C: in particular, I want to call the procedure esame_cancella(IN code CHAR(5)), which deletes a selected row of the table 'esame'.
int status;
MYSQL_RES *result;
MYSQL_ROW row;
MYSQL_FIELD *field;
MYSQL_RES *rs_metadata;
MYSQL_STMT *stmt;
MYSQL_BIND ps_params[6];
unsigned long length[6];
char cod[64];
printf("Codice: ");
scanf ("%s",cod);
length[0] = strlen(cod);
stmt = mysql_stmt_init(conn);
if (stmt == NULL) {
printf("Could not initialize statement\n");
exit(1);
}
status = mysql_stmt_prepare(stmt, "call esame_cancella(?) ", 64);
test_stmt_error(stmt, status); //line which gives me the syntax error
memset(ps_params, 0, sizeof(ps_params));
ps_params[0].buffer_type = MYSQL_TYPE_VAR_STRING;
ps_params[0].buffer = cod;
ps_params[0].buffer_length = 64;
ps_params[0].length = &length[0];
ps_params[0].is_null = 0;
// bind parameters
status = mysql_stmt_bind_param(stmt, ps_params); // dies here
test_stmt_error(stmt, status);
// Run the stored procedure
status = mysql_stmt_execute(stmt);
test_stmt_error(stmt, status);
}
I use test_stmt_error to print the MySQL error when calling procedures.
static void test_stmt_error(MYSQL_STMT * stmt, int status)
{
if (status) {
fprintf(stderr, "Error: %s (errno: %d)\n",
mysql_stmt_error(stmt), mysql_stmt_errno(stmt));
exit(1);
}
}
When I compile and launch my program, I get the following log:
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1 (errno: 1064)
Any help?
It looks like the string length being passed to mysql_stmt_prepare is wrong - the statement is only 23 characters long, so with a length of 64 the server reads past the end of the string, which produces exactly this "near ''" syntax error. Try passing the real length.
Or better yet, try something like:
const char sql_sp[] = "call esame_cancella(?) ";
...
status = mysql_stmt_prepare(stmt, sql_sp, strlen(sql_sp));
So I have the following test Go code, which is designed to read from a binary file through stdin and send the data read to a channel (where it would then be processed further). In the version I've given here it only reads the first two values from stdin, although that's fine as far as showing the problem is concerned.
package main
import (
"fmt"
"io"
"os"
)
func input(dc chan []byte) {
data := make([]byte, 2)
var err error
var n int
for err != io.EOF {
n, err = os.Stdin.Read(data)
if n > 0 {
dc <- data[0:n]
}
}
}
func main() {
dc := make(chan []byte, 1)
go input(dc)
fmt.Println(<-dc)
}
To test it, I first build it using go build, and then send data to it using the command:
./inputtest < data.bin
The data I am currently using to test is just random binary data created with the openssl command.
The problem I am having is that it misses the first values from stdin and only gives the second and later values. I think this has to do with the channel, as the same program with the channel removed produces the correct data. Has anyone come across this before? For example, I get the following output when running this command:
./inputtest < data.bin
[36 181]
whereas I should be getting:
./inputtest < data.bin
[72 218]
(The binary data is the same in both instances.)
You're overwriting your buffer on every read, and you've got a buffered channel, so you'll lose data every time there's space in the channel.
Try something like this (not tested, written on a tablet, etc...):
import "os"
func input(dc chan []byte) error {
defer close(dc)
for {
data := make([]byte, 2)
n, err := os.Stdin.Read(data)
if n > 0 {
dc <- data[0:n]
}
if err != nil {
return err
}
}
}
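A usage sketch to go with it, assuming fmt is also imported: main drains the channel until input closes it at EOF, and because each Read gets a freshly allocated slice, no chunk is overwritten or lost:

func main() {
	dc := make(chan []byte, 1)
	go input(dc)
	// Receive until input() closes the channel at EOF; each chunk is a
	// freshly allocated slice, so nothing is overwritten behind our back.
	for chunk := range dc {
		fmt.Println(chunk)
	}
}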
I have a very good DirectMySQL unit which is ready to be used, and I want it to be a TDataset descendant so I can use it with QuickReport; I just want a MySQL query through DirectMySQL that descends from TDataset.
Everything was OK until I tried to access a big table with 10,000 rows or more. It was unstable; the error was unpredictable and not always shown, but it usually happened after you played with other tables.
It happened in GetFieldData(Field: TField; Buffer: Pointer): Boolean, which is used to get the field value from MySQL rows.
Here's the code:
function TMySQLQuery.GetFieldData(Field: TField; Buffer: Pointer): Boolean;
var
I, CT: Integer;
Row: TMySQL_Row;
TBuf: PChar;
FD: PMySQL_FieldDef;
begin
UpdateCursorPos; // added after I got the error; it did not help
Resync([]); // added after I got the error; it did not help
Result := false;
Row := oRecordset.CurrentRow;
I := Field.FieldNo-1;
FD := oRecordset.FieldDef(I);
if Not Assigned(FD) then
FD := oRecordset.FieldDef(I);
TBuf := PP(Row)[i];
Try
CT := MySQLWriteFieldData(fd.field_type, fd.length, fd.decimals, TBuf, PChar(Buffer));
Result := Buffer <> nil;
Finally
Row := nil; // added after I got the error; it did not help
FD := nil; // added after I got the error; it did not help
TBuf := nil; // added after I got the error; it did not help
Buffer := nil; // added after I got the error; it did not help
End;
end;
{
The code below translates the data type from a MySQL data type
to a TDataset data type and moves the MySQL row (TBuf) into the
TDataset buffer for display. The error always comes up from this
function when it moves the MySQL row to the buffer.
}
function TMySQLQuery.MySQLWriteFieldData(AType: byte;
ASize: Integer; ADec: cardinal; Source, Dest: PChar): Integer;
var
VI: Integer;
VF: Double;
VD: TDateTime;
begin
Result := MySQLDataSize(AType, ASize, ADec);
case AType of
FIELD_TYPE_TINY, FIELD_TYPE_SHORT, FIELD_TYPE_LONG, FIELD_TYPE_LONGLONG,
FIELD_TYPE_INT24:
begin
if Source <> '' then
VI := StrToInt(Source)
else
VI := 0;
Move(VI, Dest^, Result);
end;
FIELD_TYPE_DECIMAL, FIELD_TYPE_NEWDECIMAL:
begin
if source <> '' then
VF := internalStrToCurr(Source)
else
VF := 0;
Move(VF, Dest^, Result);
end;
FIELD_TYPE_FLOAT, FIELD_TYPE_DOUBLE:
begin
if Source <> '' then
VF := InternalStrToFloat(Source)
else
VF := 0;
Move(VF, Dest^, Result);
end;
FIELD_TYPE_TIMESTAMP:
begin
if Source <> '' then
VD := InternalStrToTimeStamp(Source)
else
VD := 0;
Move(VD, Dest^, Result);
end;
FIELD_TYPE_DATETIME:
begin
if Source <> '' then
VD := InternalStrToDateTime(Source)
else
VD := 0;
Move(VD, Dest^, Result);
end;
FIELD_TYPE_DATE:
begin
if Source <> '' then
VD := InternalStrToDate(Source)
else
VD := 0;
Move(VD, Dest^, Result);
end;
FIELD_TYPE_TIME:
begin
if Source <> '' then
VD := InternalStrToTime(Source)
else
VD := 0;
Move(VD, Dest^, Result);
end;
FIELD_TYPE_STRING, FIELD_TYPE_VAR_STRING,
FIELD_TYPE_ENUM, FIELD_TYPE_SET:
begin
if Source = nil then
Dest^ := #0
else
Move(Source^, Dest^, Result);
end;
Else
Result := 0;
Raise EMySQLError.Create( 'Write field data - Unknown type field' );
end;
end;
My guess for now is that it's a memory-related problem.
I am stuck. Could anyone help?
I also need TDataset documentation that lists the available descendant functions and explains how to use them, or how to descend from TDataset. Does anyone have it? I lack this kind of documentation.
GetFieldData must not contain UpdateCursorPos and Resync calls; otherwise you may get unpredictable errors.
FD := oRecordset.FieldDef(I) ... FD := oRecordset.FieldDef(I); looks strange - the second assignment is not needed.
The finally ... end block that resets the local variables is not needed.
I have no idea what MySQLDataSize returns. For example, it may return the size of the Delphi data type representation, or it may return the length of the data returned by MySQL; depending on that, MySQLWriteFieldData may or may not be correct.
I don't know how DirectMySQL works. If it uses raw TCP/IP to talk to MySQL, then the problem may be there - for example, it may incorrectly handle a sequence of packets.
And finally: what are the errors you are getting? What is your Delphi version? What are your MySQL client and server versions?
And so on...
In other words, it is really hard to say what is wrong. To find out I would need to get all the sources, sit in the Delphi IDE debugger, and analyze many details of what is going on - sorry, no time :)
It's solved now by adding #0 at the end of the line... Thanks so much to all who replied to my problem.