BEGIN AND COMMIT MYSQL EXECUTE JAVA - mysql

I have a simple question:
String sql ="BEGIN;" +
"DELETE FROM users WHERE username='gg';" +
"DELETE FROM comprofiler WHERE id=611;" +
"COMMIT;";
st.execute(sql);
Why doesn't this work? It works if it's just one statement; how can I write this?
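By default, MySQL Connector/J rejects several statements sent in a single execute() call (it only allows them if you add allowMultiQueries=true to the JDBC URL). The usual alternative is to drive the transaction through the JDBC API and run the statements one at a time. A minimal sketch, assuming conn is an open java.sql.Connection; deleteUser is a hypothetical wrapper:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Run both deletes in one transaction instead of sending "BEGIN;...;COMMIT;" as one string.
static void deleteUser(Connection conn) throws SQLException {
    conn.setAutoCommit(false);                 // acts as BEGIN
    try (Statement st = conn.createStatement()) {
        st.executeUpdate("DELETE FROM users WHERE username='gg'");
        st.executeUpdate("DELETE FROM comprofiler WHERE id=611");
        conn.commit();                         // acts as COMMIT
    } catch (SQLException e) {
        conn.rollback();                       // undo both deletes if either fails
        throw e;
    } finally {
        conn.setAutoCommit(true);
    }
}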

Related

SQL parametric columns in ASP.NET

Why can't you use parameters in an SQL statement as the column name? I found that out after two hours of wondering what the problem could be. The only way that seemed possible was to build the statement in a way that could be vulnerable to SQL injection (which for me wasn't a problem, because the parameters are generated server-side).
This works:
string cmdgetValues = "SELECT " + column + " FROM user WHERE " + filterColumn + " = #filter";
MySqlCommand getValues = new MySqlCommand(cmdgetValues, connectionDB);
getValues.Parameters.AddWithValue("#filter", filterValue);
This doesn't work:
string cmdgetValues = "SELECT #column FROM user WHERE #filterColumn = #filter";
MySqlCommand getValues = new MySqlCommand(cmdgetValues, connectionDB);
getValues.Parameters.AddWithValue("#column", column);
getValues.Parameters.AddWithValue("#filterColumn", filterColumn);
getValues.Parameters.AddWithValue("#filter", filterValue);
Why is this? And is it intended?
Because the select columns are a fundamental part of the query.
You can't parameterize the fundamental structure of the query, so you have to build that part of the query in code.
If you want to decide the query columns at runtime, you could try MySQL's prepared statement syntax (PREPARE ... EXECUTE) on the server side.
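A common workaround, sketched here in Java (the question uses C#, but the idea carries over unchanged): validate the identifiers against a fixed whitelist, concatenate them only after they pass, and keep the value as a real bound parameter. VALID_COLUMNS and the column names are illustrative assumptions:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;

// Identifiers cannot be bound as parameters, so check them against
// a whitelist of known column names before splicing them into the SQL.
static final List<String> VALID_COLUMNS = Arrays.asList("name", "email", "age");

static ResultSet getValues(Connection conn, String column,
                           String filterColumn, String filterValue) throws SQLException {
    if (!VALID_COLUMNS.contains(column) || !VALID_COLUMNS.contains(filterColumn)) {
        throw new IllegalArgumentException("Unknown column");
    }
    // Safe to concatenate after the whitelist check; the value still goes through a parameter.
    PreparedStatement ps = conn.prepareStatement(
            "SELECT " + column + " FROM user WHERE " + filterColumn + " = ?");
    ps.setString(1, filterValue);
    return ps.executeQuery();
}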

Import Foxpro tables into SQL Server

We have 800 different .dbf files that need to be loaded into SQL Server with their file names as the new table names, so file1.dbf has to be loaded into table file1.
In this way, we need to load all 800 FoxPro tables into SQL Server. Does anyone have an idea for this, or a script? Any help is highly appreciated.
There are multiple solutions to the problem. One is to use the upsizing wizard that ships with VFP. I only tried the original version, and it was not good at all; I haven't used it since. You could try upsizing a test database with a single table that has, say, a million rows in it, just to see whether that approach would be feasible (a million rows shouldn't take more than a minute).
What I did was to create a "generator" that would create the SQL Server tables with the mapping I wanted (i.e.: memo to varchar or varbinary(MAX), char to varchar, etc.). Then, using C#-based ActiveX code I wrote, I load the tables, multiple tables at a time (other ways of loading the tables were extremely slow). Since then that code has been used to create SQL Server tables and/or transfer existing customers' data to SQL Server.
Yet another effective way is to create a linked server to VFP using VFPOLEDB and then use OpenQuery to get the tables' structure and data:
select * into [TableName]
from OpenQuery(vfpserver, 'select * from TableName ...')
This one is fast too, and it allows you to use VFP-specific functions inside the query; however, the resulting field types might not be what you'd like.
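For reference, a minimal sketch of creating such a linked server (the server name matches the snippet above; the path is a placeholder, and the VFPOLEDB provider must be installed on the SQL Server machine):

EXEC sp_addlinkedserver
    @server = 'vfpserver',
    @srvproduct = 'Visual FoxPro',
    @provider = 'VFPOLEDB',
    @datasrc = 'C:\data\dbfs';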
Below is a solution written in FoxPro 9. You will probably need to modify it a bit, as I only handled three data types. You will also have to watch out for SQL reserved words used as field names.
SET SAFETY OFF
CLOSE ALL
CLEAR ALL
CLEAR
SET DIRE TO "C:\temp"
** house keeping
RUN del *.fxp
RUN del *.CDX
RUN del *.bak
RUN del *.err
RUN del *.txt
oPrgDir = SYS(5)+SYS(2003) && Program Directory
oPath = "C:\temp\pathtodbfs" && location of dbfs
CREATE TABLE dbfstruct (fldno N(7,0), fldnm c(16), fldtype c(20), fldlen N(5,0), fldpoint N(7,0)) && dbf structure table
STORE SQLSTRINGCONNECT("DRIVER={MySQL ODBC 3.51 Driver};SERVER=localhost;DATABASE=testdbf;UID=root;PWD=root; OPTION=3") TO oConn && SQL connection
SET DIRE TO (m.oPath)
STORE ADIR(aFL, "*.dbf") TO iFL && getting list of dbfs
SET DIRE TO (m.oPrgDir)
FOR i = 1 TO iFL
    IF AT("dbfstruct.dbf", LOWER(aFL(i,1))) = 0 THEN
        USE oPath + "\" + aFL(i,1)
        LIST STRUCTURE TO FILE "struct.txt" && output dbf structure to text file
        SET DIRE TO (m.oPrgDir)
        USE dbfstruct
        ZAP
        APPEND FROM "struct.txt" TYPE SDF
        DELETE FROM dbfstruct WHERE fldno = 0 && removing non-essential text
        PACK
        CLEAR
        DELETE FILE "struct.txt"
        SET DIRE TO (m.oPrgDir)
        =SQLEXEC(oConn, "DROP TABLE IF EXISTS testdbf." + STRTRAN(LOWER(aFL(i,1)), ".dbf", "")) && needed to remove tables already created when I was testing
        sSQL = "CREATE TABLE testdbf." + STRTRAN(LOWER(aFL(i,1)), ".dbf", "") + " ("
        SELECT dbfstruct
        GOTO TOP
        DO WHILE NOT EOF()
            @ 1,1 SAY "CREATING QUERY: " + aFL(i,1)
            sSQL = sSQL + ALLTRIM(LOWER(dbfstruct.fldnm)) + " "
            * You may have to add cases below depending on the field types of your DBFs
            DO CASE
                CASE ALLTRIM(dbfstruct.fldtype) == "Character"
                    sSQL = sSQL + "VARCHAR(" + ALLTRIM(STR(dbfstruct.fldlen)) + "),"
                CASE ALLTRIM(dbfstruct.fldtype) == "Numeric" AND dbfstruct.fldpoint = 0
                    sSQL = sSQL + "INT(" + ALLTRIM(STR(dbfstruct.fldlen)) + "),"
                CASE ALLTRIM(dbfstruct.fldtype) == "Numeric" AND dbfstruct.fldpoint > 0
                    sSQL = sSQL + "DECIMAL(" + ALLTRIM(STR(dbfstruct.fldlen)) + "," + ALLTRIM(STR(dbfstruct.fldpoint)) + ")," && include the decimal places
                OTHERWISE
                    =MESSAGEBOX("Unhandled Field Type: " + ALLTRIM(dbfstruct.fldtype), 0, "ERROR")
                    CANCEL
            ENDCASE
            SELECT dbfstruct
            SKIP
        ENDDO
        sSQL = SUBSTR(sSQL, 1, LEN(sSQL)-1) + ")"
        STORE SQLEXEC(oConn, sSQL) TO iSQL
        IF iSQL < 0 THEN
            CLEAR
            ?sSQL
            STORE FCREATE("sqlerror.txt") TO gnOut && SQL of query in case it errors
            =FPUTS(gnOut, sSQL)
            =FCLOSE(gnOut)
            =MESSAGEBOX("Error creating table on MySQL", 0, "ERROR")
            CANCEL
        ENDIF
        CLOSE DATABASES
    ENDIF
ENDFOR
=SQLDISCONNECT(oConn)
SET DIRE TO (m.oPrgDir)
SET SAFETY ON

Before Insert trigger in Derby DB

I am building a JavaFX project in NetBeans and am using a Derby database for it.
I have two tables, BOOK and ISSUE.
I want to create a BEFORE INSERT trigger on the ISSUE table that checks the value of a boolean field called "available" in the BOOK table.
A button called BookIssue, when pressed, should fire the following trigger.
If the value of "available" is false, a warning message should pop up; otherwise the insert operation should be executed.
I am not able to get the trigger command right.
String trigger = "CREATE TRIGGER toIssue NO CASCADE BEFORE INSERT ON ISSUE"
+ " FOR EACH ROW"
+ " BEGIN"
+ " SELECT isAvail from BOOK WHERE id = '"+ bookId + "'"
+ " IF isAvail = false"
+ " THEN RAISE_APPLICATION_ERROR(-20001,'Books Out of stock')"
+ " END IF"
+ " END";
databasehandler.execQuery(trigger);
I am getting the following exception in my log:
Exception at execQuery:dataHandlerSyntax error: Encountered "BEGIN" at line 1, column 71.
Could someone help me with this? I couldn't find any other place that had a similar issue with Derby.
First, I think you want executeUpdate() (the JDBC equivalent of ExecuteNonQuery()), since CREATE TRIGGER is not a query.
In addition, this line is suspect:
SELECT isAvail from BOOK WHERE id = '"+ bookId + "'"
The trigger body should look more like this:
DECLARE v_avail BOOLEAN;
SELECT b.isAvail INTO v_avail
FROM BOOK b
WHERE b.id = NEW.ID;
IF NOT v_avail THEN
    RAISE_APPLICATION_ERROR(-20001, 'Books out of stock');
END IF;
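Note, though, that Derby will not accept a body like this either: Derby's triggered action is a single SQL statement, with no BEGIN ... END blocks and no RAISE_APPLICATION_ERROR (that is Oracle syntax), which is exactly why the parser stops at "BEGIN". A common workaround is to perform the availability check in Java before running the INSERT. A minimal sketch over JDBC, using the BOOK columns from the question; issueBook and the ISSUE column names are illustrative guesses:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Check BOOK.available first; insert into ISSUE only when the book is available.
static void issueBook(Connection conn, String bookId, String memberId) throws SQLException {
    try (PreparedStatement check = conn.prepareStatement(
            "SELECT available FROM BOOK WHERE id = ?")) {
        check.setString(1, bookId);
        try (ResultSet rs = check.executeQuery()) {
            if (!rs.next() || !rs.getBoolean(1)) {
                // this is where the warning dialog would be shown instead
                throw new SQLException("Books out of stock");
            }
        }
    }
    try (PreparedStatement ins = conn.prepareStatement(
            "INSERT INTO ISSUE (book_id, member_id) VALUES (?, ?)")) {
        ins.setString(1, bookId);
        ins.setString(2, memberId);
        ins.executeUpdate();
    }
}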

Spark ETL job execute mysql only once

I have an ETL job in Spark that also connects to MySQL in order to grab some data. Historically, I've been doing it as follows:
hiveContext.read().jdbc(
dbProperties.getProperty("myDbInfo"),
"(SELECT id, name FROM users) r",
new Properties()).registerTempTable("tmp_users");
Row[] res = hiveContext.sql("SELECT "
    + " u.name, "
    + " SUM(s.revenue) AS revenue "
    + "FROM "
    + " stats s "
    + " INNER JOIN tmp_users u "
    + " ON u.id = s.user_id "
    + "GROUP BY "
    + " u.name "
    + "ORDER BY "
    + " revenue DESC "
    + "LIMIT 10").collect();
String ids = "";
// now grab me some info for users that are in tmp_user_stats
for (int i = 0; i < res.length; i++) {
    ids += (!ids.equals("") ? "," : "") + res[i].get(0);
}
hiveContext.read().jdbc(
dbProperties.getProperty("myDbInfo"),
"(SELECT name, surname, home_address FROM users WHERE id IN ("+ids+")) r",
new Properties()).registerTempTable("tmp_users_prises");
However, when scaling this to multiple worker nodes, whenever I use the tmp_users table, the JDBC query gets executed (at least) once per node, which boils down to our DB admin running around the offices with a knife.
What's the best way to handle this? Can I run the job on, say, 3 machines, limiting it to 3 queries, and then write the data to Hadoop for the other nodes to use?
Essentially, as suggested in the comments, I could run a query outside of the ETL job that prepares the data on the MySQL side and imports it to Hadoop. However, there could be subsequent queries, which suggests a solution more in line with Spark and the JDBC connection setup.
I'll accept the Sqoop solution, as it at least gives a more streamlined approach, although I'm still not sure it will do the job. If I find something, I'll edit the question again.
You can cache data:
DataFrame initialDF = hiveContext.read().jdbc(
    dbProperties.getProperty("myDbInfo"),
    "(SELECT id, name FROM users) r",
    new Properties());
initialDF.cache();
initialDF.registerTempTable("tmp_users");
After the first read, the data will be cached in memory, so subsequent queries against tmp_users reuse it instead of hitting MySQL again.
An alternative (that doesn't hurt the DBA ;) ) is to use Sqoop with the parameter --num-mappers=3 and then import the resulting files into Spark.
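For reference, a minimal sketch of such a Sqoop import (host, credentials, and paths are placeholders); Spark can then read the files that land in --target-dir:

sqoop import \
    --connect jdbc:mysql://dbhost/mydb \
    --username etl --password secret \
    --table users \
    --num-mappers 3 \
    --target-dir /data/users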

MySQL Statement error in JSP

I have an issue with an SQL statement and I don't know how to handle it. Here is the problem:
query = "INSERT INTO `mmr`(`userID`, `RunningProjects`, `MainOrders`) VALUES ("
+ session.getAttribute("id")
+ ",'"
+ request.getParameter("RunningProjects")
+ "','"
+ request.getParameter("MainOrders")')";
The values are obtained from the POST form, which contains free text. The problem is, whenever a user enters characters like ', I get an error, because that tells the parser that the value ends there (I suppose) and to look for the next value. I don't know how to include these characters and send them to the database without getting an error. Any help would be appreciated. Thank you.
The character ' is used to surround string literals in MySQL, so if any data contains such a character, it has to be escaped. This is handled automatically when you use a PreparedStatement in Java.
Change your query code accordingly.
query = "INSERT INTO `mmr`(`userID`, `RunningProjects`, `MainOrders`)
VALUES ( ?, ?,? )";
Now define a PreparedStatement instance and use it to bind values.
PreparedStatement pst = con.prepareStatement( query );
pst.setString( 1, String.valueOf( session.getAttribute("id") ) ); // getAttribute() returns Object, so convert it
pst.setString( 2, request.getParameter("RunningProjects") );
pst.setString( 3, request.getParameter("MainOrders") );
int result = pst.executeUpdate();
And I suggest using beans to handle the business logic.
First, there is a syntax error in your concatenation (the last line is missing + " before the closing '). Change it to:
query = "INSERT INTO `mmr`(`userID`, `RunningProjects`, `MainOrders`) VALUES ("
+ session.getAttribute("id")
+ ",'"
+ request.getParameter("RunningProjects")
+ "','"
+ request.getParameter("MainOrders")
+ "')";
Also, I think you are using a normal Statement in your JDBC code. Instead, I would suggest you use a PreparedStatement. Prepared statements are generally used to eliminate exactly this kind of problem (and they help with statement caching). If you use a PreparedStatement, I think your problem will be solved.