I am writing a non-web-app helper and came across the need for a synchronous query call.
Basically, within a loop I need to check the database to see if the value exists. If it doesn't then insert the value. Currently, with node-mysql I can only get it to work with a callback. Because of that, node.js treats the call as asynchronous and keeps processing my request before the query is finished. This is a big issue because in the end it could be inserting duplicates because they were in the queue.
Ideal solution (doesn't work): results is actually the client object, and I can't find the actual results within it. However, this does make it synchronous.
results = client.query('SELECT COUNT(md5) as md5Count FROM table WHERE md5 = "' + md5 + '"')
The following does not work either. Node.js treats it as asynchronous, and outerResult is still the client object.
outerResult = client.query('SELECT COUNT(md5) as md5Count FROM board WHERE md5 = "' + md5 + '"', function selectCb(err, results, fields) {console.log(results);});
Any help is appreciated.
Basically, within a loop I need to check the database to see if the value exists. If it doesn't then insert the value.
This is a problem best served with SQL. You don't solve this problem by talking to the database repeatedly, you solve this problem by having SQL only insert where the index value doesn't already exist.
INSERT INTO mytable ( name, address )
SELECT #name, #address FROM DUAL
WHERE NOT EXISTS (SELECT * FROM mytable WHERE name = #name AND address = #address)
This is a super simplified example, and not the most optimized. You can do the same thing here with sets of data, instead of record by record, if you like.
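For example, here is a minimal set-based sketch; it assumes the incoming values have first been loaded into a staging table (the staging table and its columns are illustrative, not from the question):

-- Insert every staged row that is not already present, in one statement.
INSERT INTO mytable ( name, address )
SELECT s.name, s.address
FROM staging s
WHERE NOT EXISTS
    (SELECT 1 FROM mytable m
     WHERE m.name = s.name AND m.address = s.address);

If a UNIQUE index exists on those columns, MySQL's INSERT IGNORE achieves the same effect even more simply.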
Basically, within a loop I need to check the database to see if the value exists. If it doesn't then insert the value. Currently, with node-mysql I can only get it to work with a callback. Because of that, node.js treats the call as asynchronous and keeps processing my request before the query is finished. This is a big issue because in the end it could be inserting duplicates because they were in the queue.
There is an asynchronous solution, there always is.
Basically you're worried that duplicate entries could be entered.
I presume you have an array of data to loop through. Your problem is solved with _.uniq (from Underscore.js) or some other filter solution.
So you simply call _.uniq(md5s).forEach(function(md5) { ... })
Related
Let's suppose I have a set of integers of a variable length. I apply a function on this set of integers and I obtain a result.
myFunction(setOfIntegers) => myResult
Let's suppose a call to myFunction is very expensive and I would like to somehow store the results of these function calls.
In my application I am already using MySQL and what I was thinking was to somehow create a table with the setOfIntegers as a PK and myResult as an additional field.
I was thinking that I could do this by transforming the setOfIntegers to a string before storing it in the DB.
Can this be done in any other way? Or would there be a better way to store results of such function calls in order to avoid calling them a 2nd time with the same set of integers?
I don't know about Java, but Perl has my $str = join(',', @array) and PHP has $str = implode(',', $array). The string $str could then be used as the PRIMARY KEY (assuming it is not too long), and the result would go in the other column.
Your app code (in Java) would need to first do an implode and SELECT to see if the function has already been evaluated for the given array. If not, then perform the function and end by INSERTing a new row.
If this will be multi-threaded, you could use INSERT IGNORE to deal with dups. (There are other solutions, too.)
Another note: If your set-of-integers is ordered, then what I describe is 'complete'. If it is unordered, then sort it before imploding. This will provide a canonical representation.
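A minimal sketch of such a cache table (all names here are illustrative, not from the question):

-- The sorted, comma-joined set of integers serves as the primary key.
CREATE TABLE function_cache (
    set_key VARCHAR(255) NOT NULL,  -- e.g. '3,17,42', sorted before joining
    result  BIGINT      NOT NULL,
    PRIMARY KEY (set_key)
);

-- INSERT IGNORE makes a concurrent insert of the same key a harmless no-op.
INSERT IGNORE INTO function_cache (set_key, result)
VALUES ('3,17,42', 12345);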
If the function can be implemented in MySQL directly, I would suggest using Views.
https://www.mysqltutorial.org/mysql-views-tutorial.aspx/
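For example, a sketch assuming the function is something SQL can compute (a SUM here) and the sets are stored one member per row in a hypothetical set_members table:

-- The view recomputes the result on demand; nothing is cached.
CREATE VIEW set_results AS
SELECT set_key, SUM(member_value) AS result
FROM set_members
GROUP BY set_key;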
I have the following code attempting to truncate a table. The Joomla documentation makes me believe this will work, but it does not. What am I missing?
$db = JFactory::getDbo();
$truncate_query = $db->getQuery(true);
//$truncate_query = 'TRUNCATE ' . $db->quoteName('#__mytable');
$truncate_query->truncateTable($db->quoteName('#__mytable'));
$db->setQuery($truncate_query);
echo $truncate_query;
exit();
If I use the line that is commented out to manually generate the SQL, it does work. The reason I am still looking to use the truncateTable function is that I am trying to include the truncation in a transaction. When I use the manual statement, the table is still truncated even if another part of the transaction fails. This is a problem because the other statements rely on the data that was truncated, so if the table is emptied when it shouldn't be, there is no data left to run the transaction again. Very annoying!
Here's how you call/execute your truncation query:
JFactory::getDbo()->truncateTable('#__mytable');
And now some more details...
Here is the method's code block in the Joomla source code:
public function truncateTable($table)
{
    $this->setQuery('TRUNCATE TABLE ' . $this->quoteName($table));
    $this->execute();
}
As you can see the truncateTable() method expects a tablename as a string for its sole parameter; you are offering a backtick-wrapped string -- but the method already offers the backtick-wrapping service. (Even if you strip your backticks off, your approach will not be successful.)
The setQuery() and execute() calls are already inside the method, so you don't need to create a new query object nor execute anything manually.
There is no return in the method, so the default null is returned -- ergo, your $truncate_query becomes null. When you try to execute(null), you get nothing -- not even an error message.
If you want to know how many rows were removed, you will need to run a SELECT query beforehand to count the rows.
If you want to be sure that there are no remaining rows of data, you'll need to call a SELECT and check for zero rows of data.
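One more note on the transaction behavior you describe: in MySQL, TRUNCATE TABLE is DDL and performs an implicit commit, so it can never be rolled back, no matter how you issue it. If emptying the table must participate in the transaction, DELETE is the rollback-safe alternative. A sketch in plain SQL (your table name; Joomla replaces the #__ prefix placeholder before execution):

START TRANSACTION;
DELETE FROM `#__mytable`;   -- removes all rows, but unlike TRUNCATE it can be rolled back
-- ... the other statements of your transaction ...
COMMIT;                     -- or ROLLBACK on failure, which restores the deleted rows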
Here is my answer (with different wording) on your JSX question.
JDBC allows us to fetch the value of a primary key that is automatically generated by the database (e.g. IDENTITY, AUTO_INCREMENT) using the following syntax:
PreparedStatement ps = connection.prepareStatement(
    "INSERT INTO post (title) VALUES (?)",
    Statement.RETURN_GENERATED_KEYS
);
ps.setString(1, "Post title"); // bind the title (illustrative value)
ps.executeUpdate();

// the generated key is exposed through a ResultSet
ResultSet resultSet = ps.getGeneratedKeys();
while (resultSet.next()) {
    LOGGER.info("Generated identifier: {}", resultSet.getLong(1));
}
I'm interested in whether the Oracle, SQL Server, PostgreSQL, or MySQL driver uses a separate round trip to fetch the identifier, or whether there is a single round trip that executes the insert and fetches the ResultSet automatically.
It depends on the database and driver.
Although you didn't ask for it, I will answer for Firebird ;). In Firebird/Jaybird the retrieval itself doesn't require extra roundtrips, but using Statement.RETURN_GENERATED_KEYS or the integer array version will require three extra roundtrips (prepare, execute, fetch) to determine the columns to request (I still need to build a form of caching for it). Using the version with a String array will not require extra roundtrips (I would love to have RETURNING * like in PostgreSQL...).
In PostgreSQL with PgJDBC there is no extra round-trip to fetch generated keys.
It sends a Parse/Describe/Bind/Execute message series followed by a Sync, then reads the results including the returned result-set. There's only one client/server round-trip required because the protocol pipelines requests.
However, batches that could otherwise be streamed to the server are sometimes broken up into smaller chunks or run one by one if generated keys are requested. To avoid this, use the String[] form where you name the columns you want returned, and name only columns of fixed-width data types like integer. This only matters for batches, and it's due to a design problem in PgJDBC.
(I posted a patch to add batch pipelining support in libpq that doesn't have that limitation; it'll do one client/server round trip for arbitrary-sized batches with arbitrary-sized results, including returning keys.)
MySQL receives the generated key(s) automatically in the OK packet of the protocol in response to executing a statement. There is no communication overhead when requesting generated keys.
In my opinion, even for such a trivial thing, a single approach that works in all database systems will fail.
The only pragmatic solution is (in analogy to Hibernate) to find the best working solution for each target RDBMS and call it a dialect of your one-for-all solution :)
Here is the information for Oracle.
I'm using a sequence to generate the key; the same behavior is observed for an IDENTITY column.
create table auto_pk (
    id  number,
    pad varchar2(100)
);
This works and uses only one roundtrip:
def stmt = con.prepareStatement("insert into auto_pk values(auto_pk_seq.nextval, 'XXX')",
        Statement.RETURN_GENERATED_KEYS)
def rowCount = stmt.executeUpdate()
def generatedKeys = stmt.getGeneratedKeys()
if (null != generatedKeys && generatedKeys.next()) {
    def id = generatedKeys.getString(1)
}
But unfortunately you get a ROWID as the result, not the generated key.
How is it implemented internally? You can see it if you activate a 10046 trace (BTW, this is also the best way to see how many roundtrips were performed):
PARSING IN CURSOR
insert into auto_pk values(auto_pk_seq.nextval, 'XXX')
RETURNING ROWID INTO :1
END OF STMT
So you can see that the JDBC 3.0 standard is implemented, but you don't get the requested result. Under the covers, the RETURNING clause is used.
The right approach to get the generated key in Oracle is therefore:
def stmt = con.prepareStatement("insert into auto_pk values(auto_pk_seq.nextval, 'XXX') returning id into ?")
stmt.registerReturnParameter(1, Types.INTEGER);
def rowCount = stmt.executeUpdate()
def generatedKeys = stmt.getReturnResultSet()
if (null != generatedKeys && generatedKeys.next()) {
def id = generatedKeys.getLong(1);
}
Note:
Oracle Release 12.1.0.2.0
To activate the 10046 trace, use:
con.createStatement().execute "alter session set events '10046 trace name context forever, level 12'"
con.createStatement().execute "ALTER SESSION SET tracefile_identifier = my_identifier"
Depending on frameworks or libraries to do things that are perfectly possible in plain SQL is bad design IMHO, especially when working against a defined DBMS. (The Statement.RETURN_GENERATED_KEYS approach is relatively innocuous, although it apparently does raise a question for you. But where frameworks are built on separate entities and do all sorts of joins and filters in code, or have custom-built transaction isolation logic, things get inefficient and messy very quickly.)
Why not simply:
PreparedStatement ps= connection.prepareStatement(
"INSERT INTO post (title) VALUES (?) RETURNING id");
Single trip, defined result.
Assuming that every value of MBR_DTH_DT other than '00000000' evaluates to a valid date, could the following UPDATE statement fail when running on multiple processors, if racing threads performed the CAST before the filter?
UPDATE a
SET a.[MBR_DTH_DT] = cast(a.[MBR_DTH_DT] as date)
FROM [IPDP_MEMBER_DEMOGRAPHIC_DECBR] a
WHERE a.[MBR_DTH_DT] <> '00000000'
I am trying to find the source of the following error
Error: 2014-01-30 04:42:47.67
Code: 0xC002F210
Source: Execute csp_load_ipdp_member_demographic Execute SQL Task
Description: Executing the query "exec dbo.csp_load_ipdp_member_demographic" failed with the following error: "Conversion failed when converting date and/or time from character string.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
End Error
It could be another UPDATE or INSERT query, but the others in question appear to have properly typed data from what I can see, so I am left with only the above.
No, it simply sounds like you have bad data in the MBR_DTH_DT column, which is VARCHAR but should be a date (once you clean out the bad data).
You can identify those rows using:
SELECT MBR_DTH_DT
FROM dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
WHERE ISDATE(MBR_DTH_DT) = 0;
Now, you may find that the only rows returned are ones your filter already excludes (e.g. MBR_DTH_DT = '00000000').
This has nothing to do with multiple processors, race conditions, etc. It's just that SQL Server can try to perform the cast before it applies the filter.
Randy suggests adding an additional clause, but this is not enough, because the CAST can still happen before any/all filters. You usually work around this by something like this (though it makes absolutely no sense in your case, when everything is the same column):
UPDATE dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
SET MBR_DTH_DT = CASE
WHEN ISDATE(MBR_DTH_DT) = 1 THEN CAST(MBR_DTH_DT AS DATE)
ELSE MBR_DTH_DT END
WHERE MBR_DTH_DT <> '00000000';
(I'm not sure why in the question you're using UPDATE alias FROM table AS alias syntax; with a single-table update, this only serves to make the syntax more convoluted.)
However, in this case, this does you absolutely no good; since the target column is a string, you're just trying to convert a string to a date and back to a string again.
The real solution: stop using strings to store dates, and stop using token strings like '00000000' to denote that a date isn't available. Either use a dimension table for your dates or just live with NULL already.
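A sketch of that cleanup (table and column names from the question; run it only after the bad values have been reviewed):

-- Turn the token string into a real missing value...
UPDATE dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
SET MBR_DTH_DT = NULL
WHERE MBR_DTH_DT = '00000000';

-- ...then store actual dates; this step fails if any non-date strings remain.
ALTER TABLE dbo.IPDP_MEMBER_DEMOGRAPHIC_DECBR
ALTER COLUMN MBR_DTH_DT date NULL;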
Not likely. Even with multiple processors, there is no guarantee the query will processed in parallel.
Why not try something like this, assuming you're using SQL Server 2012? Even if you're not, you could write a UDF to validate a date in the same way.
UPDATE a
SET a.[MBR_DTH_DT] = cast(a.[MBR_DTH_DT] as date)
FROM [IPDP_MEMBER_DEMOGRAPHIC_DECBR] a
WHERE a.[MBR_DTH_DT] <> '00000000' And IsDate(MBR_DTH_DT) = 1
Most likely you have bad data and are not aware of it.
Whoops, just checked. IsDate has been available since SQL 2005. So try using it.
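For completeness, here is what the SQL Server 2012+ variant might look like with TRY_CONVERT, which returns NULL instead of raising an error for unconvertible strings (a sketch; that this is the 2012 feature alluded to above is my assumption):

UPDATE a
SET a.[MBR_DTH_DT] = TRY_CONVERT(date, a.[MBR_DTH_DT])
FROM [IPDP_MEMBER_DEMOGRAPHIC_DECBR] a
WHERE a.[MBR_DTH_DT] <> '00000000'
  AND TRY_CONVERT(date, a.[MBR_DTH_DT]) IS NOT NULL;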
I have the following code...
echo "<form><center><input type=submit name=subs value='Submit'></center></form>";
$val=$_POST['resulta']; //this is from a textarea name='resulta'
if (isset($_POST['subs'])) //from submit name='subs'
{
$aa=mysql_query("select max(reservno) as 'maxr' from reservation") or die(mysql_error()); //select maximum reservno
$bb=mysql_fetch_array($aa);
$cc=$bb['maxr'];
$lines = explode("\n", $val);
foreach ($lines as $line) {
    mysql_query("insert into location_list (reservno, location) values ('$cc', '$line')")
        or die(mysql_error()); //insert each line of the textarea as a separate location_list row
}
} // close the isset($_POST['subs']) block
If I input the following data on the textarea (assume that I have maximum reservno '00014' from reservation table),
Davao - Cebu
Cebu - Davao
then submit it, I'll have these data in my location_list table:
loc_id || reservno || location
00001 || 00014 || Davao - Cebu
00002 || 00014 || Cebu - Davao
Then this code:
$gg=mysql_query("SELECT GROUP_CONCAT(IF((#var_ctr := #var_ctr + 1) = #cnt,
location,
SUBSTRING_INDEX(location,' - ', 1)
)
ORDER BY loc_id ASC
SEPARATOR ' - ') AS locations
FROM location_list,
(SELECT @cnt := COUNT(1), @var_ctr := 0
FROM location_list
WHERE reservno='$cc'
) dummy
WHERE reservno='$cc'") or die(mysql_error()); //QUERY IN QUESTION
$hh=mysql_fetch_array($gg);
$ii=$hh['locations'];
mysql_query("update reservation set itinerary = '$ii' where reservno = '$cc'")
or die(mysql_error());
is supposed to update reservation table with 'Davao - Cebu - Davao' but it's returning this instead, 'Davao - Cebu - Cebu'. I was previously helped by this forum to have this code working but now I'm facing another difficulty. Just can't get it to work. Please help me. Thanks in advance!
I got it working (without ORDER BY loc_id ASC) as long as I set phpMyAdmin operations loc_id ascending. But whenever I delete all data, it goes back as loc_id descending so I have to reset it. It doesn't entirely solve the problem but I guess this is as far as I can go. :)) I just have to make sure that the table column loc_id is always in ascending order. Thank you everyone for your help! I really appreciate it! But if you have any better answer, like how to set the table column always in ascending order or better query, etc, feel free to post it here. May God bless you all!
The database server is allowed to rewrite your query to optimize its execution. This might affect the order of the individual parts, in particular the order in which the various assignments are executed. I assume that some such reordering causes the result of the query to become undefined, in such a way that it works on sqlfiddle but not on your actual production system.
I can't put my finger on the exact location where things go wrong, but I believe that the core of the problem is the fact that SQL is intended to work on relations, but you try to abuse it for sequential programming. I suggest you retrieve the data from the database using portable SQL without any variable hackery, and then use PHP to perform any post-processing you might need. PHP is much better suited to express the ideas you're formulating, and no optimization or reordering of statements will get in your way there. And as your query currently only results in a single value, fetching multiple rows and combining them into a single value in the PHP code shouldn't increase complexity too much.
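For instance, the portable query could be as simple as this sketch (names from the question; $cc would be bound in the PHP code):

-- Fetch the legs in a defined order; concatenate them in PHP afterwards.
SELECT location
FROM location_list
WHERE reservno = '00014'
ORDER BY loc_id ASC;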
Edit:
While discussing another answer using a similar technique (also by Omesh, just like the answer your code is based upon), I found this in the MySQL manual:
As a general rule, you should never assign a value to a user variable
and read the value within the same statement. You might get the
results you expect, but this is not guaranteed. The order of
evaluation for expressions involving user variables is undefined and
may change based on the elements contained within a given statement;
in addition, this order is not guaranteed to be the same between
releases of the MySQL Server.
So there are no guarantees about the order in which these variable assignments are evaluated, and therefore no guarantees that the query does what you expect. It might work, but it might fail suddenly and unexpectedly. Therefore I strongly suggest you avoid this approach unless you have some reliable mechanism to check the validity of the results, or really don't care about whether they are valid.