UPDATE 11.15.2022
I have conducted extensive testing and found the pattern of the problem. Once again, what's strange is this ONLY happens if you pass a function as a parameter to the originating Stored Procedure; passing a hardcoded value or variable works just fine.
The issue is when the Stored Procedure calls another Stored Procedure that checks @@read_only to see if it can WRITE to the database. I confirmed that removing any code that writes data fixes the issue -- so ultimately it appears passing a STATIC value to the SP causes the procedure execution to bypass any writing (as expected) because of the IF @@read_only = FALSE THEN ...write... check.
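The guard in the inner procedure looks roughly like this (a simplified sketch; the table and column names are made up):
IF @@read_only = FALSE THEN
    -- only attempt the write when the server is not a read-only replica
    INSERT INTO audit_log (event_name, created_at) VALUES ('refresh', NOW());
END IF;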
It seems passing a function somehow causes MySQL to compile a "tree" of calls and subcalls to see if they CAN write rather than if they DO write.
It appears the only way to work around this is to pass the parameters as variables rather than function calls. We can do this, but it will require substantial refactoring.
I just wonder why MySQL is doing this - why passing a function is causing the system to look ahead and see IF it COULD write rather than if it does.
We have a Read Replica that's up and running just fine. We can execute reads against it without a problem.
We can do this:
CALL get_table_data(1, 1, "SELECT * from PERSON where ID=1;", @out_result, @out_result_value);
And it executes fine. Note it is tagged READS SQL DATA; it does not write anything out.
We can also do this:
SELECT get_value("OBJECT_BASE", "NAME");
Which is a SELECT function that is READ ONLY.
However, if we try to execute this:
CALL get_table_data(1, get_value("OBJECT_BASE", "NAME"), "SELECT * from PERSON where ID=1;", @out_result, @out_result_value);
We get the error:
Error: ER_OPTION_PREVENTS_STATEMENT: The MySQL server is running with the --read-only option so it cannot execute this statement
We're baffled at what could cause this. Both the SP and the function are read-only and execute individually just fine, but the second we embed the function call in the call to the SP, the system chokes.
Any ideas?
So AWS cannot figure this out. The issue only happens when a function is passed as a parameter to a stored procedure that calls another stored procedure (not even passing the value of the function) that has a @@read_only check before doing an INSERT or UPDATE. So for some reason, the system is doing a pre-scan check when a function is passed vs. a variable or hardcoded value.
The workaround is to pass the function value as a variable.
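In other words, something like this (a sketch using the same procedure and function as above; the variable name is arbitrary):
-- evaluate the function first, then hand the resulting variable to the procedure
SET @object_name = get_value("OBJECT_BASE", "NAME");
CALL get_table_data(1, @object_name, "SELECT * from PERSON where ID=1;", @out_result, @out_result_value);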
I'm going to report this issue to Oracle as it might be some sort of bug, especially given the function is DETERMINISTIC.
Hi, I have an SSIS package which does tasks like a) truncate the last 7 days of data from a table, if any, and then re-load it. All of these are placed in a sequence container and it runs fine. Now I'm planning to remove the hard-coded value of 7 and introduce a variable, NoOfDays, which I can provide at run time. Can this be achieved?
I added a variable and tried to map it to the parameter of the Execute SQL Task. I also want the value to be available to the next step, a Data Flow Task. It gave the following error:
[Execute SQL Task] Error: Executing the query "delete from USER_CONTENT where CONVERT..." failed with the following error: "The variable name '@NoOfDays' has already been declared. Variable names must be unique within a query batch or stored procedure.
Statement(s) could not be prepared.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
The query I'm using is
delete from USER_CONTENT
where CONVERT(date,ISSUE_DATE)>=CONVERT(DATE,GETDATE()-7)
and it is against an OLE-DB connection.
I do not have enough points to post a comment, so I will have to ask you this way: are you using query parameters with a "?"? If not, then in order to use a package variable with a stored procedure, you will need to write your statement to use them. Also, don't name your parameters; use the default 0, 1, 2, etc. that the package assigns.
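For an OLE DB connection, that would look something like the following sketch, with the ? placeholder mapped to the NoOfDays package variable in Parameter Mapping under the parameter name 0:
delete from USER_CONTENT
where CONVERT(date, ISSUE_DATE) >= CONVERT(date, GETDATE() - ?)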
I am retrieving data from a MySQL database in MATLAB.
conn=database('my_database','','');
sql = 'call latest_hl_tradables()';
curs = exec(conn,sql);
curs = fetch(curs);
Yesterday, the code returned 600 rows. This morning, it returns 1 row. If I run the stored procedure (latest_hl_tradables) in MySQL Workbench, it still returns 600 rows.
Strangely, the code started working again.
All I did was write some diagnostic code to count the number of records in the tables that latest_hl_tradables() queries. Initially those only returned 1 row; then they started returning all the rows. I don't know what changed.
(My configuration is R2014b / SQL Server 2014 / MS JDBC 4.0. I believe the OP describes a generic issue, not a database-vendor-specific one.)
I have also experienced unreliable results within MATLAB from stored procedures which return result sets. I submitted a service request to MathWorks. The short answer is that in this use case, neither of the MATLAB Database Toolbox functions exec or runstoredprocedure is appropriate. Rather, stored procedures should be called from MATLAB via runsqlscript. Here's the tech's response:
Unfortunately there is no way to get output of the query using the
cursor object. The only way to get the output on DML is by running
them as a script. The syntax is:
>>results = runsqlscript(connObject,'ScriptName.sql')
where the SQL queries are placed in the file "ScriptName.sql".
This will return cell array of results as the output
Note that this solution makes life complicated when your stored procedure requires input parameters. In typical cases when sp parameters aren't known a priori, this means you have to generate a custom SQL script on-the-fly & write it out to disk.
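For the parameterless procedure above, the generated script would contain nothing but the call itself (a sketch; the file name is arbitrary), and you would then pass that file name to runsqlscript:
-- contents of ScriptName.sql
call latest_hl_tradables();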
I'm still relatively new to SQL Server but love a lot of things about it, except for the array of "all-slightly-different-but-none-can-do-everything", "finicky-in-different-ways" scripting options where, just when you feel like you're starting to get a handle on things and are cruising, you slam into yet another roadblock. I've been down the dynamic SQL path (and have found the restrictions on variables having short lifetimes) and as per a previous suggestion that I received ( Script to create a schema using a variable ), am now trying to write sqlcmd scripts instead.
A lot of scripts work fine and dandy if you run them "naked". As soon as you put some of them into a Try/Catch block to implement error handling, however, you often run into ridiculous restrictions, most notably DDL commands which "vant to be alone" and need to be the first/only statement in a batch. GO is useless in this context because if you put THAT anywhere inside a Try/Catch block it guarantees that you'll get a syntax error.
Obviously I've scoured the web on this (and have looked at some of the "similar questions" that appeared while editing this post) but keep coming up with examples which are either "naked" in the sense above, or are Try/Catch examples on code which doesn't have these restrictions.
In the case of creating a schema, I used the approach that had been suggested to me for dynamic SQL; I ran it through sp_executesql. That's not a problem since it's essentially one line of code, the problem is that I hit it again when I tried to create a trigger on a table (and am guessing that I will with some other Create commands).
CREATE TRIGGER MySchema.NoDelete
ON MySchema.MyTable
INSTEAD OF DELETE
AS
BEGIN
SET NOCOUNT ON;
RAISERROR ('Deletions are not allowed on this table', 16,1)
END
Run this by itself and it's fine. Put Begin Try before it and End Try and a Catch block after it and you get:
Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'TRIGGER'.
with a red squiggly line on BEGIN with the bogus tool tip "Incorrect syntax near Begin, expecting External".
I tried the sp_executesql path with this again but:
First, it also generated a bogus "Syntax error near $" error, which is about as useful as telling me that there's some syntactical error, somewhere, between here and the planet Zargthorp. But more importantly:
Second, even if I did get it to work for a relatively trivial trigger like this one, I'm having nightmares trying to imagine packaging a complex, multi-line trigger in such a fashion. But even if I DID get past that:
Third, it would make the code much more obscure and defeat one of the purposes of using scripts in the first place, that being self-documentation.
My questions therefore are:
Is Try/Catch effectively useless for commands which, for reasons best known to MS designers, need to live in isolated majesty like Create Trigger (and which aren't one-liners like Create Schema which can be neatly packaged up into sp_executesql); or
In my relative newbieness have I missed some other way of working around the kind of restriction that I've slammed into with Create Trigger?
Thanks in advance for any responses.
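For reference, the bare-bones shape of the sp_executesql route for the trigger above is something like this (a sketch, not my actual script, with the trigger body passed as a single string so it runs in its own batch):
BEGIN TRY
    EXEC sp_executesql N'CREATE TRIGGER MySchema.NoDelete
    ON MySchema.MyTable
    INSTEAD OF DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        RAISERROR (''Deletions are not allowed on this table'', 16, 1)
    END';
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();
END CATCH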
When I drag a particular stored procedure into the VS 2008 dbml designer, it shows up with Return Type set to "none", and it's read only so I can't change it. The designer code shows it as returning an int, and if I change that manually, it just gets undone on the next build.
But with another (nearly identical) stored procedure, I can change the return type just fine (from "Auto Generated Type" to what I want.)
I've run into this problem on two separate machines. Any idea what's going on?
Here's the stored procedure that works:
USE [studio]
GO
/****** Object: StoredProcedure [dbo].[GetCourseAnnouncements] Script Date: 05/29/2009 09:44:51 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[GetCourseAnnouncements]
@course int
AS
SELECT * FROM Announcements WHERE Announcements.course = @course
RETURN
And this one doesn't:
USE [studio]
GO
/****** Object: StoredProcedure [dbo].[GetCourseAssignments] Script Date: 05/29/2009 09:45:32 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[GetCourseAssignments]
@course int
AS
SELECT * FROM Assignments WHERE Assignments.course = @course ORDER BY date_due ASC
RETURN
I've also seen this problem several times and while I don't know what causes it, I've come across a pretty easy way to get past it. It involves manually editing the xml within the .dbml file, but it's a pretty simple edit.
Right-click on your Data Context's .dbml file in the Solution Explorer (not the .layout file nor the designer.cs file) and open it with the XML Editor. You should find your stored procedure listed in a <Function> ... </Function> block. You should also find the custom class you would like to set as the Return Type listed in a <Type> ... </Type> block.
Step one is to give your custom class an identifier. You do so by adding an "Id" attribute, like this, making sure that it's unique within the dbml file:
<Type Name="MyCustomClass" Id="ID1">
Step two is to tell your function to use the newly ID'd type as the Return Type. You do so by replacing the line in your <Function> block that looks like
<Return Type="System.Int32" />
with
<ElementType IdRef="ID1" />
Save the file, exit, and rebuild. Done. Re-open the .dbml file in design mode to verify: Your procedure will now have the custom class set as the Return Type.
I had a similar mapping problem, but I found the culprit in my case.
If your procedure or any subprocedure that gets called has temporary objects like
CREATE TABLE #result (
ID INT,
Message VARCHAR(50)
)
then you're in trouble, even if you don't select anything from these temporaries.
The mapper has a general problem with these temporary objects, because the type can be changed outside the procedure in the session context. Temporary objects are not typesafe for the mapper, so it refuses to use them.
Replace them with table variables and you're back in business:
DECLARE @result AS TABLE (
ID INT,
Message VARCHAR(50)
)
I followed the link provided by Tony for a better solution (same answer as Arash's). Do read the blog, especially the last part, as there is a thing to consider when adding SET FMTONLY OFF to your stored procedure.
When you add
SET FMTONLY OFF
at the beginning of the stored procedure and load it into the DBML, LINQ2SQL will execute the actual stored procedure. To get the correct return table object type, said stored procedure must return something when called without parameters.
That means:
1. Have default values for all input parameters
2. Make sure SP returns at least a row of data -- this is where I stumbled
create table #test ( text varchar(50) );
insert into #test (text) values ('X'); -- w/o this line, return type would be Int32
select * from #test; -- SP returns something, proper object type would be generated
return;
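Putting those pieces together, a minimal sketch of such a stored procedure might look like this (the procedure name and parameter are made up):
create procedure dbo.GetSomething
    @days int = 7  -- default value so the designer can call it without arguments
as
begin
    set fmtonly off;
    create table #test ( text varchar(50) );
    insert into #test (text) values ('X');  -- at least one row, so the return type isn't Int32
    select * from #test;
    return;
end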
Okay, I found the problem... kind of. I had changed the name of the table "Assignments" and forgot to update the stored procedure, so the DBML designer was confused. BUT even after I updated the stored procedure, deleted it from the DBML designer and re-added it, it wasn't working!
This is nearly the same problem discussed here: http://forums.asp.net/t/1231821.aspx.
It only worked when I deleted the stored procedure from the database and recreated it, and deleted it from the DBML designer, recompiled, restarted Visual Studio, and added it again. This is the second time I've run into "refresh" problems with the Visual Studio DBML designer...
I managed to work out an easier way, which just wasn't obvious at the time, but sounds straightforward when written down:
Delete the stored procedure from the design surface of the .dbml file
Click Save All files
Click Refresh in Server Explorer on the list of Stored Procedures
Add (drag) the stored procedure back onto the design surface of the .dbml file
Click Save All
Click Build
Check the designer.cs code file and you will have the updated C# code for the new version of the stored procedure
check http://www.high-flying.co.uk/C-Sharp/linq-to-sql-can-t-update-dbml-file.html
I had the same problem, but it only happens if my sp uses FTS. What I did was "cheat" the dbml designer: I removed the FTS language stuff and it works perfectly; now I can change the return type. Later I go back to the sp and add the FTS again and it works perfectly!
Hope this helps.
The way to get around this issue is:
Add "set fmtonly off;" to the beginning of your stored procedure.
After adding that statement, have DBML generate the code for your stored procedure.
If your stored procedure's return type is still 'int' in your DBML code, comment out the entire body of the stored procedure, create a new SELECT statement whose returned field types and names match the original SELECT statement, and have DBML regenerate the code again. It has to work!
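For that stand-in SELECT, something like this works as a sketch (the column names and types are placeholders for whatever the original SELECT returns):
-- temporary stand-in while DBML regenerates; restore the real body afterwards
SELECT CAST(0 AS int) AS ID, CAST('' AS varchar(50)) AS Message;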
Thanks @Rubenz, I was also using FTS (Full-Text Search) in a stored procedure and your steps worked.
I commented the FTS section from the stored procedure, added the stored procedure to .dbml, and then uncommented the FTS section back to original.
This also happens when using SQL user-defined types as parameters in stored procedures.
http://alejandrobog.wordpress.com/2009/09/23/linq-to-sql-%e2%80%94-can%e2%80%99t-modify-return-type-of-stored-procedure/
Better solution found here: http://tonesdotnetblog.wordpress.com/2010/06/02/solution-my-generated-linq-to-sql-stored-procedure-returns-an-int-when-it-should-return-a-table/
OK, I didn't want to be changing anything in my Designer.cs code; I knew there was a different problem and it wasn't related to my stored procedure (I wasn't using a temp table anyway).
Simply deleting the sp from the database and updating the model wasn't helping at all. The newly created model still had the same problems...
What I found is that for some reason copies of my sp were created in DatabaseModel -> Function Imports.
What I did was delete the duplicated objects in Function Imports and update the model. It worked!
Regards,
Chris
I realize that this is an old question but the above suggestions pointed me in the right direction but did not work in my case. I ended up editing the dbml file with the XML editor in Visual Studio as suggested above.
Once in the file, look for the Function section for the stored procedure. You will most likely not see the section – ElementType – which defines the return type. I began to edit the fields from another Function (stored procedure) and found that this was too tedious and may introduce issues.
I decided to delete all the Column definitions from the ElementType - but leave the ElementType section and save the file. I then deleted the stored procedure from the designer and re-added it. It then filled in the correct columns within the ElementType. Worked beautifully.