MySQL Stored Procedure - Checking if certain conditions are met one by one; if one is not met, exit procedure and return a specific message - mysql

I am trying to make a MySQL stored procedure that processes a book purchase and inserts records into other tables about the purchase. However, these insertions can only happen if three conditions are met: the customer is in the system, the book is in the system, and there is enough quantity.
I want to check for each condition individually, and if it passes the first condition, it moves to the next, but if it doesn't, I want it to end the procedure and return a value, and so on for each condition. If it passes all three conditions, the insertions can happen. Here's how I coded it:
DELIMITER //
CREATE PROCEDURE process_purchase(
IN book_key INT,
IN customer_key INT,
IN quantity INT
)
BEGIN
DECLARE book_inventory_key_var INT;
DECLARE purchase_key_var INT;
SELECT book_inventory_key
INTO book_inventory_key_var
FROM book_inventory
WHERE book_key = book_key.book_inventory_key;
SELECT purchase_key
INTO purchase_key_var
FROM purchases
WHERE customer_key = customer_key.purchases;
IF customer_key != customer_key.customers THEN
SELECT '-1';
ELSEIF book_key != book_key.books THEN
SELECT '-2';
ELSEIF quantity < quantity_on_stock(book_key) THEN
SELECT '-3';
ELSE
INSERT INTO purchases VALUES (customer_key, CURDATE());
INSERT INTO purchase_items (book_inventory_key, purchase_key, quantity) VALUES (book_inventory_key_var, purchase_key_var, quantity);
SELECT '1';
END IF;
END//
DELIMITER ;
I compare the customer and book keys to their values in the other tables, and the quantity to the quantity_on_stock stored function I previously made. I use a chain of IF-ELSEIF to go through each condition one by one, and if all of them are passed, the insertions occur. If not, it won't go to the next condition, and will return the SELECT message, correct? The procedure runs without errors, but I am unsure if this is the correct method, or if there's a better way of going about this.

Checking conditions sequentially is subject to race conditions. Breaking this paradigm is key to moving from a procedural to a SQL-based method: use the database's own features to obtain consistency rather than procedural code.
purchase_items should have foreign key constraints referencing the book and customer tables. If an insert raises an FK exception, then one of those checks has failed, depending on the error. DECLARE ... HANDLER will help catch these errors.
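A minimal sketch of that approach, assuming the FK constraints exist (the handler body and result values are illustrative; 1452 is MySQL's "foreign key constraint fails" error, SQLSTATE '23000'):
CREATE PROCEDURE process_purchase(IN book_key INT, IN customer_key INT, IN quantity INT)
BEGIN
  -- a failing FK insert jumps straight to this handler instead of being pre-checked
  DECLARE EXIT HANDLER FOR 1452
  BEGIN
    ROLLBACK;
    SELECT '-1' AS result; -- unknown customer or book, depending on which FK failed
  END;
  START TRANSACTION;
  -- ... the inserts go here ...
  COMMIT;
END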
For the quantity:
INSERT INTO purchase_items (book_inventory_key, purchase_key, quantity)
SELECT book_inventory_key_var, purchase_key_var, quantity
FROM books b
WHERE b.book_key = book_key AND b.available >= quantity;
If ROW_COUNT() is 0 after this insert, then there wasn't sufficient quantity.
You will also need to reduce the number of books available within the same SQL transaction.
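A minimal sketch of that decrement, assuming an available column on book_inventory (column and variable names are carried over from the question); ROW_COUNT() doubles as the quantity check:
UPDATE book_inventory
SET available = available - quantity
WHERE book_inventory_key = book_inventory_key_var
  AND available >= quantity;
IF ROW_COUNT() = 0 THEN
  ROLLBACK;
  SELECT '-3' AS result; -- not enough stock
ELSE
  COMMIT;
  SELECT '1' AS result;
END IF;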
If you don't have to do this in a STORED PROCEDURE, don't. A lot of the constructs here are easier in application code. If this is an assignment where stored procedures are required to get extra marks, get through it, and never write a stored procedure again.

Related

In MySQL, how do you change the value of a column in a table depending on an updated column in another table?

I'm attempting to create a trigger that increases the value of the column INCOME in the Salary table by 500 each time the value of WorkYear in the Employee table is increased by one year. For example, if the workYear is 4 and the salary is 1000, the salary should be 1500 if the workYear is increased by one year, 2000 if the workYear is increased by two years, and so on.
I tried to create such a trigger and here is my code:
DELIMITER $$
create trigger increment AFTER UPDATE on employee
for each row
BEGIN
IF OLD.workYear <> new.workYear THEN
update salary
set income = (income + (new.workYear-old.workYear)*500);
END IF;
END$$
The idea behind this code is that after we update workYear, the trigger should increase the salary by the difference in years * 500, i.e. (new.workYear-old.workYear)*500, but it increases all the rows by the same number (5500 if we add one year, 27500 if we add two years, etc.), which is not what we are looking for.
I am new to MySQL and would appreciate it if someone could assist me with this.
Thanks in advance
FaissalHamdi
In MySQL, an UPDATE inside an AFTER trigger can affect the entire table, so you must restrict the update's scope with criteria or a join.
Create Trigger in MySQL
To distinguish between the value of the columns BEFORE and AFTER the DML has fired, you use the NEW and OLD modifiers.
The concept is similar across databases, but each RDBMS has a slightly different syntax for this, so be careful to search for help specific to your RDBMS.
In the original query these special table references were used to evaluate the change condition; however, the scope of the UPDATE was not defined.
Assuming that there is a primary key field called Id on this salary table.
Also note that, if you can, the query should be expressed as a set-based operation instead of a static procedural script; this will be more portable across database engines.
So let's try this:
DELIMITER $$
create trigger increment AFTER UPDATE on employee
for each row
BEGIN
UPDATE salary s
SET income = income + (NEW.workYear - OLD.workYear) * 500
WHERE s.Id = OLD.Id;
END$$
DELIMITER ;

MySQL: Run a stored procedure getting parameter from a query inside another stored procedure

I am new to MySQL.
I am developing a system where many users are assigned to specific tasks. When they are inactive for a certain period of time (let's say more than 10 minutes), I would like the system to automatically clear their assignments so that others can work on them.
To achieve that I have created a table called tblactivitytracker for activity tracking. Assignments are in a table called tblinquiries. I have created a stored procedure to get the inactive users.
Here is an sqlfiddle example: Get Inactive Users
In the above example I get 3 inactive users: auditor1, auditor2 and auditor3.
I have created a stored procedure to clear assignment of a single user which does the job perfectly.
CREATE PROCEDURE `spClearAssignedInquiry`(IN `pAssignedTo` VARCHAR(50))
UPDATE
tblinquiries
SET
AuditStatus='Check', AssignedTo=NULL, Result=NULL,
ResultCategories=NULL, AuditBy=NULL,
Remarks=NULL, StartTime=NULL, EndTime=NULL
WHERE
AssignedTo=pAssignedTo AND
AuditStatus='Assigned' AND EndTime IS NULL
If I pass auditor1 as a parameter in the above procedure it will clear the user's assignment.
To pass all inactive users and clear the assignments in a single go I tried the below procedure following this stackoverflow solution:
CREATE PROCEDURE `spInactiveUsers`()
BEGIN
DECLARE done BOOLEAN DEFAULT FALSE;
DECLARE AssignedTo VARCHAR(50);
DECLARE cur CURSOR FOR
SELECT
q1.AssignedTo AS AssignedTo
FROM
(SELECT
InquiryId, AssignedTo
FROM
tblinquiries
WHERE
AuditStatus='Assigned' AND StartTime IS NOT NULL AND EndTime IS NULL
ORDER BY
AssignedTo ASC
) q1
RIGHT JOIN
(SELECT
UserId, MAX(LastActivity) AS LastActivity, ROUND(TIME_TO_SEC(TIMEDIFF(MAX(LastActivity),CURRENT_TIMESTAMP()))/60,0) AS InactiveMinutes
FROM
tblactivitytracker
GROUP BY
UserId
ORDER BY
LastActivity ASC
) q2
ON
q2.UserId=q1.AssignedTo
WHERE
q2.InactiveMinutes>10;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done := TRUE;
OPEN cur;
testLoop: LOOP
FETCH cur INTO AssignedTo;
IF done THEN
LEAVE testLoop;
END IF;
CALL spClearAssignedInquiry(AssignedTo);
END LOOP testLoop;
CLOSE cur;
END
But it does not clear any of the assignments.
I have been banging my head against the wall for the last couple of days. Any help would be much appreciated. Thanks in advance.
You are using a variable name that is also the name of a column. The value of the variable will take precedence over the column value, see the documentation:
A local variable should not have the same name as a table column. If an SQL statement, such as a SELECT ... INTO statement, contains a reference to a column and a declared local variable with the same name, MySQL currently interprets the reference as the name of a variable.
So in
...
FROM
(SELECT
InquiryId, AssignedTo
...
you are selecting the variable AssignedTo (which is null), not the column from your table.
Just rename it (in the DECLARE and in the loop), or, less advisable, explicitly state the table name to set the scope, e.g. SELECT InquiryId, tblinquiries.AssignedTo .... ORDER BY tblinquiries.AssignedTo.
There is another (minor) problem in your use of TIMEDIFF in TIMEDIFF(MAX(LastActivity), CURRENT_TIMESTAMP()): it requires the later time as the first argument if you want to get a positive number (as in q2.InactiveMinutes>10).
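A sketch of the procedure with both fixes applied (v_assigned_to is an invented name; the rest follows the original structure):
CREATE PROCEDURE `spInactiveUsers`()
BEGIN
DECLARE done BOOLEAN DEFAULT FALSE;
DECLARE v_assigned_to VARCHAR(50); -- renamed so it no longer shadows the column
DECLARE cur CURSOR FOR
SELECT q1.AssignedTo
FROM (SELECT InquiryId, AssignedTo
      FROM tblinquiries
      WHERE AuditStatus='Assigned' AND StartTime IS NOT NULL AND EndTime IS NULL) q1
RIGHT JOIN (SELECT UserId,
            ROUND(TIME_TO_SEC(TIMEDIFF(CURRENT_TIMESTAMP(), MAX(LastActivity)))/60, 0) AS InactiveMinutes
            FROM tblactivitytracker
            GROUP BY UserId) q2
ON q2.UserId = q1.AssignedTo
WHERE q2.InactiveMinutes > 10;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done := TRUE;
OPEN cur;
testLoop: LOOP
FETCH cur INTO v_assigned_to;
IF done THEN
LEAVE testLoop;
END IF;
CALL spClearAssignedInquiry(v_assigned_to);
END LOOP testLoop;
CLOSE cur;
END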

UPDATE primary key in POSTGRES database [duplicate]

Several months ago I learned from an answer on Stack Overflow how to perform multiple updates at once in MySQL using the following syntax:
INSERT INTO table (id, field, field2) VALUES (1, 'A', 'X'), (2, 'B', 'Y'), (3, 'C', 'Z')
ON DUPLICATE KEY UPDATE field=VALUES(field), field2=VALUES(field2);
I've now switched over to PostgreSQL and apparently this is not correct. It's referring to all the correct tables so I assume it's a matter of different keywords being used but I'm not sure where in the PostgreSQL documentation this is covered.
To clarify, I want to insert several things and if they already exist to update them.
PostgreSQL has had UPSERT syntax since version 9.5, with the ON CONFLICT clause, using the following syntax (similar to MySQL's):
INSERT INTO the_table (id, column_1, column_2)
VALUES (1, 'A', 'X'), (2, 'B', 'Y'), (3, 'C', 'Z')
ON CONFLICT (id) DO UPDATE
SET column_1 = excluded.column_1,
column_2 = excluded.column_2;
Searching PostgreSQL's mailing list archives for "upsert" leads to an example in the manual of doing what you possibly want to do:
Example 38-2. Exceptions with UPDATE/INSERT
This example uses exception handling to perform either UPDATE or INSERT, as appropriate:
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);
CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
LOOP
-- first try to update the key
-- note that "a" must be unique
UPDATE db SET b = data WHERE a = key;
IF found THEN
RETURN;
END IF;
-- not there, so try to insert the key
-- if someone else inserts the same key concurrently,
-- we could get a unique-key failure
BEGIN
INSERT INTO db(a,b) VALUES (key, data);
RETURN;
EXCEPTION WHEN unique_violation THEN
-- do nothing, and loop to try the UPDATE again
END;
END LOOP;
END;
$$
LANGUAGE plpgsql;
SELECT merge_db(1, 'david');
SELECT merge_db(1, 'dennis');
There's possibly an example of how to do this in bulk, using CTEs in 9.1 and above, in the hackers mailing list:
WITH foos AS (SELECT (UNNEST(%foo[])).*),
updated AS (UPDATE foo SET foo.a = foos.a ... RETURNING foo.id)
INSERT INTO foo SELECT foos.* FROM foos LEFT JOIN updated USING(id)
WHERE updated.id IS NULL;
See a_horse_with_no_name's answer for a clearer example.
Warning: this is not safe if executed from multiple sessions at the same time (see caveats below).
Another clever way to do an "UPSERT" in postgresql is to do two sequential UPDATE/INSERT statements that are each designed to succeed or have no effect.
UPDATE table SET field='C', field2='Z' WHERE id=3;
INSERT INTO table (id, field, field2)
SELECT 3, 'C', 'Z'
WHERE NOT EXISTS (SELECT 1 FROM table WHERE id=3);
The UPDATE will succeed if a row with "id=3" already exists, otherwise it has no effect.
The INSERT will succeed only if row with "id=3" does not already exist.
You can combine these two into a single string and run them both with a single SQL statement executed from your application. Running them together in a single transaction is highly recommended.
This works very well when run in isolation or on a locked table, but is subject to race conditions that mean it might still fail with duplicate key error if a row is inserted concurrently, or might terminate with no row inserted when a row is deleted concurrently. A SERIALIZABLE transaction on PostgreSQL 9.1 or higher will handle it reliably at the cost of a very high serialization failure rate, meaning you'll have to retry a lot. See why is upsert so complicated, which discusses this case in more detail.
This approach is also subject to lost updates in read committed isolation unless the application checks the affected row counts and verifies that either the insert or the update affected a row.
With PostgreSQL 9.1 this can be achieved using a writeable CTE (common table expression):
WITH new_values (id, field1, field2) as (
values
(1, 'A', 'X'),
(2, 'B', 'Y'),
(3, 'C', 'Z')
),
upsert as
(
update mytable m
set field1 = nv.field1,
field2 = nv.field2
FROM new_values nv
WHERE m.id = nv.id
RETURNING m.*
)
INSERT INTO mytable (id, field1, field2)
SELECT id, field1, field2
FROM new_values
WHERE NOT EXISTS (SELECT 1
FROM upsert up
WHERE up.id = new_values.id);
See these blog entries:
Upserting via Writeable CTE
WAITING FOR 9.1 – WRITABLE CTE
WHY IS UPSERT SO COMPLICATED?
Note that this solution does not prevent a unique key violation but it is not vulnerable to lost updates.
See the follow up by Craig Ringer on dba.stackexchange.com
In PostgreSQL 9.5 and newer you can use INSERT ... ON CONFLICT UPDATE.
See the documentation.
A MySQL INSERT ... ON DUPLICATE KEY UPDATE can be directly rephrased to an ON CONFLICT UPDATE. Neither is SQL-standard syntax; they're both database-specific extensions. There are good reasons MERGE wasn't used for this; a new syntax wasn't created just for fun. (MySQL's syntax also has issues that mean it wasn't adopted directly.)
e.g. given setup:
CREATE TABLE tablename (a integer primary key, b integer, c integer);
INSERT INTO tablename (a, b, c) values (1, 2, 3);
the MySQL query:
INSERT INTO tablename (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=c+1;
becomes:
INSERT INTO tablename (a, b, c) values (1, 2, 10)
ON CONFLICT (a) DO UPDATE SET c = tablename.c + 1;
Differences:
You must specify the column name (or unique constraint name) to use for the uniqueness check. That's the ON CONFLICT (columnname) DO ... part.
The keyword SET must be used, as if this was a normal UPDATE statement
It has some nice features too:
You can have a WHERE clause on your UPDATE (letting you effectively turn ON CONFLICT UPDATE into ON CONFLICT IGNORE for certain values)
The proposed-for-insertion values are available as the row-variable EXCLUDED, which has the same structure as the target table. You can get the original values in the table by using the table name. So in this case EXCLUDED.c will be 10 (because that's what we tried to insert) and tablename.c will be 3 because that's the current value in the table. You can use either or both in the SET expressions and WHERE clause.
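As a small hedged illustration of that WHERE clause, reusing the tablename example above, the conflicting row is only overwritten when the incoming value is larger:
INSERT INTO tablename (a, b, c) VALUES (1, 2, 10)
ON CONFLICT (a) DO UPDATE
SET c = EXCLUDED.c
WHERE tablename.c < EXCLUDED.c; -- otherwise the statement does nothing for this row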
For background on upsert see How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
I was looking for the same thing when I came here, but the lack of a generic "upsert" function bothered me a bit, so I thought you could just pass the update and insert SQL as arguments to that function from the manual.
That would look like this:
CREATE FUNCTION upsert (sql_update TEXT, sql_insert TEXT)
RETURNS VOID
LANGUAGE plpgsql
AS $$
BEGIN
LOOP
-- first try to update
EXECUTE sql_update;
-- check if the row is found
IF FOUND THEN
RETURN;
END IF;
-- not found so insert the row
BEGIN
EXECUTE sql_insert;
RETURN;
EXCEPTION WHEN unique_violation THEN
-- do nothing and loop
END;
END LOOP;
END;
$$;
And perhaps to do what you initially wanted to do, a batch "upsert", you could use Tcl to split the sql_update and loop over the individual updates; the performance hit will be very small, see http://archives.postgresql.org/pgsql-performance/2006-04/msg00557.php
The highest cost is executing the query from your code; on the database side the execution cost is much smaller.
There is no simple command to do it.
The most correct approach is to use function, like the one from docs.
Another solution (although not that safe) is to do an update with RETURNING, check which rows were updated, and insert the rest of them.
Something along the lines of:
update table
set column = x.column
from (values (1,'aa'),(2,'bb'),(3,'cc')) as x (id, column)
where table.id = x.id
returning id;
assuming id 2 was returned:
insert into table (id, column) values (1, 'aa'), (3, 'cc');
Of course it will bail out sooner or later (in a concurrent environment), as there is a clear race condition here, but usually it will work.
Here's a longer and more comprehensive article on the topic.
I use this merge function:
CREATE OR REPLACE FUNCTION merge_tabla(key INT, data TEXT)
RETURNS void AS
$BODY$
BEGIN
IF EXISTS(SELECT a FROM tabla WHERE a = key)
THEN
UPDATE tabla SET b = data WHERE a = key;
RETURN;
ELSE
INSERT INTO tabla(a,b) VALUES (key, data);
RETURN;
END IF;
END;
$BODY$
LANGUAGE plpgsql
Personally, I've set up a "rule" attached to the insert statement. Say you had a "dns" table that recorded dns hits per customer on a per-time basis:
CREATE TABLE dns (
"time" timestamp without time zone NOT NULL,
customer_id integer NOT NULL,
hits integer
);
You wanted to be able to re-insert rows with updated values, or create them if they didn't exist already. Keyed on the customer_id and the time. Something like this:
CREATE RULE replace_dns AS
ON INSERT TO dns
WHERE (EXISTS (SELECT 1 FROM dns WHERE ((dns."time" = new."time")
AND (dns.customer_id = new.customer_id))))
DO INSTEAD UPDATE dns
SET hits = new.hits
WHERE ((dns."time" = new."time") AND (dns.customer_id = new.customer_id));
Update: This has the potential to fail if simultaneous inserts are happening, as it will generate unique_violation exceptions. However, the non-terminated transaction will continue and succeed, and you just need to repeat the terminated transaction.
However, if there are tons of inserts happening all the time, you will want to put a table lock around the insert statements: SHARE ROW EXCLUSIVE locking will prevent any operations that could insert, delete or update rows in your target table. However, updates that do not update the unique key are safe, so if no operation will do this, use advisory locks instead.
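A brief sketch of taking that lock around inserts into the dns table from the example (the inserted values are made up):
BEGIN;
LOCK TABLE dns IN SHARE ROW EXCLUSIVE MODE; -- blocks concurrent writers until COMMIT
INSERT INTO dns ("time", customer_id, hits) VALUES (now(), 42, 7); -- rewritten into an UPDATE by replace_dns if the row exists
COMMIT;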
Also, the COPY command does not use RULES, so if you're inserting with COPY, you'll need to use triggers instead.
Similar to most-liked answer, but works slightly faster:
WITH upsert AS (UPDATE spider_count SET tally=1 WHERE date='today' AND spider='Googlebot' RETURNING *)
INSERT INTO spider_count (spider, tally) SELECT 'Googlebot', 1 WHERE NOT EXISTS (SELECT * FROM upsert)
(source: http://www.the-art-of-web.com/sql/upsert/)
I custom "upsert" function above, if you want to INSERT AND REPLACE :
`
CREATE OR REPLACE FUNCTION upsert(sql_insert text, sql_update text)
RETURNS void AS
$BODY$
BEGIN
-- first try to insert, then fall back to update. Note: the insert includes the pk, the update does not...
EXECUTE sql_insert;
RETURN;
EXCEPTION WHEN unique_violation THEN
EXECUTE sql_update;
IF FOUND THEN
RETURN;
END IF;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION upsert(text, text)
OWNER TO postgres;
And then to execute it, do something like this:
SELECT upsert($$INSERT INTO ...$$, $$UPDATE ... $$)
It is important to use double dollar quoting to avoid compiler errors.
check the speed...
According to the PostgreSQL documentation of the INSERT statement, handling the ON DUPLICATE KEY case is not supported. That part of the syntax is a proprietary MySQL extension.
I have the same issue for managing account settings as name value pairs.
The design criteria is that different clients could have different settings sets.
My solution, similar to JWP's, is to bulk erase and replace, generating the merge record within your application.
This is pretty bulletproof and platform independent, and since there are never more than about 20 settings per client, this is only 3 fairly low-load db calls - probably the fastest method.
The alternative of updating individual rows - checking for exceptions, then inserting - or some combination thereof, is hideous code: slow, and it often breaks because (as mentioned above) non-standard SQL exception handling changes from db to db, or even from release to release.
#This is pseudo-code - within the application:
BEGIN TRANSACTION - get transaction lock
SELECT all current name value pairs where id = $id into a hash record
create a merge record from the current and update record
(set intersection where shared keys in new win, and empty values in new are deleted).
DELETE all name value pairs where id = $id
COPY/INSERT merged records
END TRANSACTION
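Translated into SQL, the transaction might look like this; account_settings and its columns are invented names for illustration, and the merged values come from the application:
BEGIN;
-- remove the client's current settings
DELETE FROM account_settings WHERE account_id = 42;
-- write back the merged set computed in the application (COPY works just as well)
INSERT INTO account_settings (account_id, name, value) VALUES
    (42, 'theme',  'dark'),
    (42, 'locale', 'en_US');
COMMIT;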
CREATE OR REPLACE FUNCTION save_user(_id integer, _name character varying)
RETURNS boolean AS
$BODY$
BEGIN
UPDATE users SET name = _name WHERE id = _id;
IF FOUND THEN
RETURN true;
END IF;
BEGIN
INSERT INTO users (id, name) VALUES (_id, _name);
EXCEPTION WHEN OTHERS THEN
UPDATE users SET name = _name WHERE id = _id;
END;
RETURN TRUE;
END;
$BODY$
LANGUAGE plpgsql VOLATILE STRICT
For merging small sets, using the above function is fine. However, if you are merging large amounts of data, I'd suggest looking into http://mbk.projects.postgresql.org
The current best practice that I'm aware of is:
COPY new/updated data into temp table (sure, or you can do INSERT if the cost is ok)
Acquire Lock [optional] (advisory is preferable to table locks, IMO)
Merge. (the fun part; a sketch follows)
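A hedged sketch of those three steps for a hypothetical target(id, val) table (table, column, and sample values are assumptions):
-- 1. stage the incoming rows
CREATE TEMP TABLE staging (LIKE target INCLUDING ALL);
INSERT INTO staging (id, val) VALUES (1, 'A'), (2, 'B'); -- or COPY for larger loads
-- 2. optional lock to serialize concurrent merges
BEGIN;
LOCK TABLE target IN SHARE ROW EXCLUSIVE MODE;
-- 3. merge: update what exists, insert what doesn't
UPDATE target t SET val = s.val FROM staging s WHERE t.id = s.id;
INSERT INTO target (id, val)
SELECT s.id, s.val FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
COMMIT;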
UPDATE will return the number of modified rows. If you use JDBC (Java), you can then check this value against 0 and, if no rows were affected, fire an INSERT instead. If you use some other programming language, the number of modified rows can probably still be obtained; check the documentation.
This may not be as elegant, but you get much simpler SQL that is more trivial to use from the calling code. By contrast, if you write the ten-line script in PL/pgSQL, you probably should have a unit test of one kind or another just for it.
Edit: This does not work as expected. Unlike the accepted answer, this produces unique key violations when two processes repeatedly call upsert_foo concurrently.
Eureka! I figured out a way to do it in one query: use UPDATE ... RETURNING to test if any rows were affected:
CREATE TABLE foo (k INT PRIMARY KEY, v TEXT);
CREATE FUNCTION update_foo(k INT, v TEXT)
RETURNS SETOF INT AS $$
UPDATE foo SET v = $2 WHERE k = $1 RETURNING $1
$$ LANGUAGE sql;
CREATE FUNCTION upsert_foo(k INT, v TEXT)
RETURNS VOID AS $$
INSERT INTO foo
SELECT $1, $2
WHERE NOT EXISTS (SELECT update_foo($1, $2))
$$ LANGUAGE sql;
The UPDATE has to be done in a separate procedure because, unfortunately, this is a syntax error:
... WHERE NOT EXISTS (UPDATE ...)
Now it works as desired:
SELECT upsert_foo(1, 'hi');
SELECT upsert_foo(1, 'bye');
SELECT upsert_foo(3, 'hi');
SELECT upsert_foo(3, 'bye');
PostgreSQL >= v15
Big news on this topic: as of PostgreSQL v15, it is possible to use the MERGE command. In fact, this long-awaited feature was listed first among the improvements of the v15 release.
This is similar to INSERT ... ON CONFLICT but more batch-oriented. It has a powerful WHEN MATCHED vs WHEN NOT MATCHED structure that gives the ability to INSERT, UPDATE or DELETE on such conditions.
It not only eases bulk changes, it even adds more control than the traditional UPSERT and INSERT ... ON CONFLICT.
Take a look at this very complete sample from the official page:
MERGE INTO wines w
USING wine_stock_changes s
ON s.winename = w.winename
WHEN NOT MATCHED AND s.stock_delta > 0 THEN
INSERT VALUES(s.winename, s.stock_delta)
WHEN MATCHED AND w.stock + s.stock_delta > 0 THEN
UPDATE SET stock = w.stock + s.stock_delta
WHEN MATCHED THEN
DELETE;
PostgreSQL v9, v10, v11, v12, v13, v14
If your version is below v15 but at least v9.5, probably the best choice is to use the UPSERT syntax with the ON CONFLICT clause.
Here is an example of how to do an upsert with parameters and without special SQL constructions, if you have a special condition (sometimes you can't use ON CONFLICT because you can't create the constraint):
WITH upd AS
(
update view_layer set metadata=:metadata where layer_id = :layer_id and view_id = :view_id returning id
)
insert into view_layer (layer_id, view_id, metadata)
(select :layer_id layer_id, :view_id view_id, :metadata metadata FROM view_layer l
where NOT EXISTS(select id FROM upd WHERE id IS NOT NULL) limit 1)
returning id
maybe it will be helpful

Store Procedure - SLOW insert + possible VARs conflict

I'm working on a messaging system that basically consists of two tables: CONVERSATIONS and MESSAGES. The procedure checks if there is a conversation; if there is, it saves the conversation ID into a VAR and uses that for the message insert. IF THERE IS NOT, it creates one entry in the CONVERSATIONS table and uses the LAST ID as an entry value for the MESSAGES table.
SO far it works just fine but I have 2 issues:
In my ignorant view the so-defined VARs @last_id and @conv_id may incur some sort of conflict IF more users are calling the procedure at the SAME TIME. Imagine @last_id is set to 40 and, before the insert happens, another user calls the same procedure and that value gets set to 41. Is that a possibility???
This INSERT procedure seems to be PRETTY slow; once cached, though, it seems to get a bit faster, but I'm not happy with it. Is there any solution besides INDEXING?
Thank you.
DELIMITER //
CREATE PROCEDURE chat_insert(id1 INT, id2 INT, text VARCHAR(250))
BEGIN
SELECT id INTO @conv_id FROM conversations WHERE
(user_id_one = id1 AND user_id_two = id2)
OR (user_id_one = id2 AND user_id_two = id1) LIMIT 1;
IF(@conv_id IS NOT NULL)
THEN
INSERT INTO messages (conversations_id,user_id,text)
VALUES (@conv_id,id1,text);
ELSE
INSERT INTO conversations (user_id_one,user_id_two)
VALUES (id1,id2);
SET @last_id = LAST_INSERT_ID();
INSERT INTO messages (conversations_id,user_id,text)
VALUES (@last_id,id1,text);
END IF;
END
//

Call a Stored Procedure From a Stored Procedure and/or using COUNT

OK, first off, I am not a MySQL guru. Second, I did search, but saw nothing relevant related to MySQL, and since my DB knowledge is limited, guessing syntactical differences between two different database types just isn't in the cards.
I am trying to determine if a particular value already exists in a table before inserting a row. I've decided to go about this using two Stored procedures. The first:
CREATE PROCEDURE `nExists` ( n VARCHAR(255) ) BEGIN
SELECT COUNT(*) FROM (SELECT * FROM Users WHERE username=n) as T;
END
And The Second:
CREATE PROCEDURE `createUser` ( n VARCHAR(255) ) BEGIN
IF (nExists(n) = 0) THEN
INSERT INTO Users...
END IF;
END
So, as you can see, I'm attempting to call nExists from createUser. I get the error that no Function exists with the name nExists...because it's a stored procedure. I'm not clear on what the difference is, or why such a difference would be necessary, but I'm a Java dev, so maybe I'm missing some grand DB-related concept here.
Could you guys help me out by any chance?
Thanks
I'm not sure how it helped you, but...
why SELECT COUNT(*) FROM (SELECT * FROM Users WHERE username=n) and not just SELECT COUNT(*) FROM Users WHERE username=n?
Just make the user name (or whatever the primary application index is) a UNIQUE index and then there is no need to test: Just try to insert a new record. If it already exists, handle the error. If it succeeds, all is well.
It can (and should) all be one stored procedure.
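A minimal sketch of that single procedure, assuming a Users(username) column (the column list is simplified); 1062 is MySQL's duplicate-key error (ER_DUP_ENTRY):
ALTER TABLE Users ADD UNIQUE KEY uq_username (username);
DELIMITER //
CREATE PROCEDURE `createUser` ( n VARCHAR(255) )
BEGIN
-- if the username already exists, the INSERT fails with error 1062 and we report that instead
DECLARE EXIT HANDLER FOR 1062
SELECT 0 AS created;
INSERT INTO Users (username) VALUES (n);
SELECT 1 AS created;
END//
DELIMITER ;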