I have a MySQL table table_foo with columns col1 of type DATETIME and col2 of type INT. Both accept NULL values, but inserting NULL from PowerShell throws an error.
$oMYSQLCommand.CommandText='INSERT into `table_foo` (`col1`,`col2`) VALUES("' + $null + '", "' + $null + '")'
$iRowsAffected=$oMYSQLCommand.ExecuteNonQuery()
I also tried using [DBNull]::Value, as in:
$oMYSQLCommand.CommandText='INSERT into `table_foo` (`col1`,`col2`) VALUES("' + [DBNull]::Value + '", "' + $null + '")'
Error:
Exception calling "ExecuteNonQuery" with "0" argument(s): "Incorrect datetime value: '' for column 'col1' at row 1"
Since you're constructing a string into which the null values are to be embedded, you must represent the null values using SQL syntax, which means: verbatim NULL (without surrounding single quotes):
$oMYSQLCommand.CommandText =
'INSERT into `table_foo` (`col1`,`col2`) VALUES (NULL, NULL)'
If the values come from PowerShell variables that are situationally $null and must therefore conditionally be translated to verbatim NULL, the best approach is to use a helper function, as demonstrated in this answer.
In the simplest case, with all variables containing strings, if you want to treat an empty string or a $null value as a SQL NULL,
you could define a function tf (short for: transform) as follows:
function tf ($val) {
if ([string]::IsNullOrEmpty($val)) { 'NULL' } else { "'{0}'" -f $val }
}
You could then use the function as follows, with $var1 and $var2 containing the values to embed:
$oMYSQLCommand.CommandText =
'INSERT into `table_foo` (`col1`,`col2`) VALUES ({0}, {1})' -f
(tf $var1), (tf $var2)
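For readers outside PowerShell, the same transform can be sketched as a hypothetical JavaScript helper (not part of the original answer; it also doubles embedded single quotes, which the PowerShell version leaves to the caller):

```javascript
// Hypothetical port of the `tf` helper: render a value as a SQL literal,
// mapping null/undefined/empty string to the unquoted keyword NULL.
function tf(val) {
  if (val === null || val === undefined || val === '') return 'NULL';
  // Double any embedded single quotes so the literal stays well-formed.
  return `'${String(val).replace(/'/g, "''")}'`;
}

// Build the INSERT text from two variables, one of them null:
const sql = `INSERT into \`table_foo\` (\`col1\`,\`col2\`) VALUES (${tf(null)}, ${tf('2024-01-01 00:00:00')})`;
// → INSERT into `table_foo` (`col1`,`col2`) VALUES (NULL, '2024-01-01 00:00:00')
```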
A simpler variation of @mklement0's answer: MySQL accepts values enclosed in quotes, but NULL must appear without quotes; if NULL is enclosed in quotes, it is treated as the string 'NULL'. So, when concatenating variables, add the quotes only when the value is not NULL:
if ($var_x -eq "") { $var_x = "NULL" } else { $var_x = '"' + $var_x + '"' }
if ($var_y -eq "") { $var_y = "NULL" } else { $var_y = '"' + $var_y + '"' }
$oMYSQLCommand.CommandText='INSERT into `table_foo` (`col1`,`col2`) VALUES(' + $var_x + ',' + $var_y + ')'
$iRowsAffected=$oMYSQLCommand.ExecuteNonQuery()
Try typing NULL directly instead of using a variable:
VALUES(NULL, NULL)
Something like this:
$oMYSQLCommand.CommandText='INSERT into `table_foo` (`col1`,`col2`) VALUES(NULL, NULL)'
Related
When I run the query below from a MySQL console it works fine, but when I wrap it in Perl/DBI I get:
DBD::mysql::st execute failed: Unknown column 'AR_email' in 'field list'
Here is the query:
my $q = "SELECT SUBSTRING(AR_email, LOCATE('@', AR_email) + 1) AS domain
FROM carrier
WHERE AR_email IS NOT NULL
AND SUBSTRING(AR_email, LOCATE('@', AR_email) + 1) =?";
my $sth=$dbh->prepare($q);
$sth->execute($domain);
Any idea how I can fix this?
The problem here is unintended string interpolation.
You are using double quotes (") when assigning the query string to $q, but it contains an at sign (@), which is interpolated.
So $q actually ends up containing:
SELECT SUBSTRING(AR_email, LOCATE(', AR_email) + 1) AS domain
FROM carrier
WHERE AR_email IS NOT NULL
AND SUBSTRING(AR_email, LOCATE(', AR_email) + 1) =?
If you were running this under use warnings, you would get this message:
Possible unintended interpolation of @' in string
One way to solve this is to define the query as a non-interpolating literal string, for example:
my $q = q/SELECT SUBSTRING(AR_email, LOCATE('@', AR_email) + 1) AS domain
FROM carrier
WHERE AR_email IS NOT NULL
AND SUBSTRING(AR_email, LOCATE('@', AR_email) + 1) =?/;
Moral of the story:
use double quotes only when you do want string interpolation
always use strict; use warnings;
I'm a beginner to SQL (as you'll soon be able to tell...) and I cannot for the life of me figure out how to best insert a simple JSON array of strings into a PostgreSQL table. I suspect the solution is quite easy but I've spent just a bit too long trying to puzzle it out on my own.
First I create the table:
CREATE TABLE test (
    id serial PRIMARY KEY,
    my_array jsonb
);
Where the array column is of type jsonb. Insert some initial data:
INSERT INTO test (id, my_array) VALUES(1, '[]');
And now I want to update the my_array column with a JSON array using Node.js node-postgres. The array might look something like
const myArray = ['foo', 'bar', 'foobar\'s escaped character emporium'];
await db.none(
'UPDATE test ' +
`SET my_array = ${myArray} ` +
'WHERE id = 1'
);
This results in
error: syntax error at or near ","
Ok, so what if I do
await db.none(
'UPDATE test ' +
`SET my_array = "${myArray}" ` +
'WHERE id = 1'
);
I get
error: column "foo,bar,foobar's escaped character emporium" does not exist
and if I do
await db.none(
'UPDATE test ' +
`SET my_array = ${JSON.stringify(myArray)} ` +
'WHERE id = 1'
);
I get
ERROR error: syntax error at or near "["
Finally, if I do
await db.none(
'UPDATE test ' +
`SET my_array = '${JSON.stringify(myArray)}' ` +
'WHERE id = 1'
);
I end up with
stack=error: syntax error at or near "s"
I've also tried storing the data as a native PostgreSQL array, but I encounter similar problems:
CREATE TABLE test (
    id serial PRIMARY KEY,
    my_array text ARRAY
);
INSERT INTO test (id, my_array) VALUES(1, '{}');
Then
const myArray = ['foo', 'bar', 'foobar\'s escaped character emporium'];
await db.none(
'UPDATE test ' +
`SET my_array = ${myArray} ` +
'WHERE id = 1'
);
gives
stack=error: syntax error at or near ","
Similar variations using JSON.stringify() and combinations of different quotes have proved fruitless as well. I kind of expected this approach to be less likely to work as PostgreSQL arrays are just a different format, but I was hoping there might be some kind of attempt at coercion. Reading through the documentation I can't spot any obvious way to convert a JSON array into the expected format for a PostgreSQL array.
Consider using a parameterized query or prepared statements.
That will take care of the quoting for you, and you get protection against SQL injection as a bonus.
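A minimal sketch of what that looks like with the pg-promise-style db.none from the question. The db object below is a stand-in so the example is self-contained; the real driver uses the same $1 placeholder syntax and does the quoting for you:

```javascript
const myArray = ['foo', 'bar', "foobar's escaped character emporium"];

// Stand-in for the real pg-promise `db` object: it just records what the
// driver would receive -- the SQL text and the values, kept separate.
const db = {
  none(query, params) {
    return Promise.resolve({ query, params });
  },
};

async function update() {
  // The JSON text travels as a parameter; no quoting happens in our code,
  // so the embedded apostrophe needs no escaping at all.
  return db.none('UPDATE test SET my_array = $1 WHERE id = 1', [
    JSON.stringify(myArray),
  ]);
}
```

With the real driver, all of the quoting experiments above become unnecessary, because the value never passes through the SQL text.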
I'm trying to construct a JSON-serialized list of key/value pair items from my SQL database (compat level 140). The trick is that the values can be anything: numbers, strings, null, or other JSON objects.
It should be able to look something like this:
[{"key":"key1","value":"A String"},{"key":"key2","value":{"InnerKey":"InnerValue"}}]
However, SQL seems to be forcing me to select either a string or an object.
SELECT
[key] = kvp.[key],
[value] = CASE
WHEN ISJSON(kvp.[value]) = 1 THEN JSON_QUERY(kvp.[value])
ELSE '"' + kvp.[value] + '"' -- See note below
END
FROM (VALUES
('key1', 'This value is a string')
,('key2', '{"description":"This value is an object"}')
,('key3', '["This","value","is","an","array","of","strings"]')
,('key4', NULL)
-- Without these lines, the above 4 work fine; with either of them, even those 4 are broken
--,('key5', (SELECT [description] = 'This value is a dynamic object' FOR JSON PATH, WITHOUT_ARRAY_WRAPPER))
--,('key6', JSON_QUERY((SELECT [description] = 'This value is a dynamic object' FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)))
) AS kvp([key], [value])
FOR JSON PATH
Am I trying to do something that SQL can't support, or am I just missing the proper syntax for making this work?
*Note that the addition of the double-quotes seems like it shouldn't be necessary. But without those, SQL fails to wrap the string and generates bad JSON:
[{"key":"key1","value":This value is a string},...
If your query is modified to this, it works:
SELECT
[key] = kvp.[key],
[value] = ISNULL(
JSON_QUERY(CASE WHEN ISJSON(kvp.[value]) = 1 THEN kvp.[value] END),
'"' + STRING_ESCAPE(kvp.[value], 'json') + '"'
)
FROM (VALUES
('key1', 'This value is a "string"')
,('key2', '{"description":"This value is an object"}')
,('key3', '["This","value","is","an","array","of","strings"]')
,('key4', NULL)
-- These now work
,('key5', (SELECT [description] = 'This value is a dynamic object' FOR JSON PATH, WITHOUT_ARRAY_WRAPPER))
,('key6', JSON_QUERY((SELECT [description] = 'This value is a dynamic object' FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)))
) AS kvp([key], [value])
FOR JSON PATH, INCLUDE_NULL_VALUES
Of course, this wouldn't be sufficient if value were an int. Also, I can't really explain why yours doesn't work.
For example:
SET @key = '["a","b"]';
SELECT JSON_SEARCH(@key, 'one', 'b');
...will return the path:
"$[1]"
Insert this as the path in JSON_EXTRACT like:
SET @value = '["1","2"]';
SELECT JSON_EXTRACT(@value, "$[1]");
...this will return the value:
"2"
But if I write following:
SET @key = '["a","b"]';
SET @value = '["1","2"]';
SET @path = (SELECT JSON_SEARCH(@key, 'one', 'b'));
SELECT JSON_EXTRACT(@value, @path);
...this will drop an error:
SQL Error (3143): Invalid JSON path expression. The error is around character position 1 in '"$[1]"'.
Trimming the double quotes works, but I don't like this solution:
SELECT JSON_EXTRACT(@value, TRIM(BOTH '"' FROM @path));
Is there an other way or am I missing something?
JSON_SEARCH returns a JSON string (a quoted JSON value) that needs to be unquoted before it can be used as a path:
SELECT JSON_EXTRACT(@value, JSON_UNQUOTE(@path));
"2"
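The distinction is easy to reproduce in any JSON library: JSON_SEARCH hands back the path as a JSON string value (quotes included), and JSON_UNQUOTE is the step that parses it back to a plain string. A quick Node sketch of the same idea:

```javascript
// What JSON_SEARCH('["a","b"]', 'one', 'b') returns: a JSON string value,
// i.e. the path text wrapped in double quotes.
const rawPath = '"$[1]"';

// JSON_UNQUOTE plays the role of JSON.parse here: strip the JSON quoting
// to get the plain path text that JSON_EXTRACT expects.
const path = JSON.parse(rawPath);
// → $[1]
```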
How can I export SQL Server database diagrams as developer-friendly SQL scripts?
By developer-friendly, I mean written in a way similar to the way a human would write them as opposed to the messy many-UPDATEs style used by existing solutions.
(Note that similar questions on this site only seem to cover specific versions of SQL Server or migration of diagrams.)
Here's a script to do this. Tested in SQL Server 2008 R2 and 2012.
DECLARE @values nvarchar(max);
SET @values =
(
SELECT '
(''' + REPLACE(name, '''', '''''') + ''', ' + CAST(principal_id AS VARCHAR(100)) +', ' + CAST(version AS VARCHAR(100)) + ', ' + sys.fn_varbintohexstr(definition) + '),'
FROM sysdiagrams
FOR XML PATH(''), TYPE
).value('.', 'nvarchar(max)');
SET @values = LEFT(@values, LEN(@values) - 1);
SELECT
'IF OBJECT_ID(N''dbo.sysdiagrams'') IS NULL
CREATE TABLE dbo.sysdiagrams
(
name sysname NOT NULL,
principal_id int NOT NULL,
diagram_id int PRIMARY KEY IDENTITY,
version int,
definition varbinary(max),
CONSTRAINT UK_principal_name UNIQUE
(
principal_id,
name
)
);
MERGE sysdiagrams AS Target
USING
(
VALUES' + @values + '
) AS Source (name, principal_id, version, definition)
ON Target.name = Source.name
AND Target.principal_id = Source.principal_id
WHEN MATCHED THEN
UPDATE SET version = Source.version, definition = Source.definition
WHEN NOT MATCHED BY Target THEN
INSERT (name, principal_id, version, definition)
VALUES (name, principal_id, version, definition);
';
It basically exports the contents of the sysdiagrams table. Note that it does not retain the diagrams' id numbers. It does retain who created the diagrams (the principal_id), which must also exist in the target database.
If you run the resultant script on a server instance that doesn't have the database diagramming objects, it should still work. However, after doing this, in order for them to appear in SSMS, I think you'll need to expand the Database Diagrams node and click Yes when asked to create them.
This is based on the 2008 script from here.
Note that there is a catch! SSMS and other Microsoft tools truncate the resulting text in the result set if you have more than a few diagrams. To get the full text, here's a PowerShell script to run the query and put the output in the clipboard:
$ErrorActionPreference = "Stop"
function Pause([string]$message) {
Write-Host $message
$host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") | Out-Null
}
function Set-Clipboard {
$input | PowerShell -NoProfile -STA -Command {
Add-Type -AssemblyName "System.Windows.Forms"
[Windows.Forms.Clipboard]::SetText($input)
}
}
$connection = New-Object System.Data.SqlClient.SqlConnection ("Data Source=DATABASE_INSTANCE;Initial Catalog=DATABASE;Integrated Security=SSPI")
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = @"
--SQL CODE
"@
$command.CommandTimeout = 60
$result = $command.ExecuteScalar()
$command.Dispose()
$connection.Dispose()
Pause "Press any key to copy the resulting SQL to the clipboard..."
$result | Set-Clipboard
Fill in the database, instance name, and SQL placeholders.
@Sam's answer is 100% correct and works (against 2019, too), and you still must do as he says: execute it using PowerShell.
His original PS script features a Pause near the end, and that bugs out for me when I run the script in PowerShell ISE (probably due to my own naivete).
So, here's my slightly changed PS script (with the SQL embedded already) that is a near-direct steal of @Sam's lovely work:
$ErrorActionPreference = "Stop"
function Set-Clipboard {
$input | PowerShell -NoProfile -STA -Command {
Add-Type -AssemblyName "System.Windows.Forms"
[Windows.Forms.Clipboard]::SetText($input)
}
}
$connection = New-Object System.Data.SqlClient.SqlConnection ("Data Source=localhost;Initial Catalog=MySpecialDataBase;Integrated Security=SSPI")
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = @"
DECLARE @values nvarchar(max);
SET @values =
(
SELECT '
(''' + REPLACE(name, '''', '''''') + ''', ' + CAST(principal_id AS VARCHAR(100)) +', ' + CAST(version AS VARCHAR(100)) + ', ' + sys.fn_varbintohexstr(definition) + '),'
FROM sysdiagrams
FOR XML PATH(''), TYPE
).value('.', 'nvarchar(max)');
SET @values = LEFT(@values, LEN(@values) - 1);
SELECT
'IF OBJECT_ID(N''dbo.sysdiagrams'') IS NULL
CREATE TABLE dbo.sysdiagrams
(
name sysname NOT NULL,
principal_id int NOT NULL,
diagram_id int PRIMARY KEY IDENTITY,
version int,
definition varbinary(max),
CONSTRAINT UK_principal_name UNIQUE
(
principal_id,
name
)
);
MERGE sysdiagrams AS Target
USING
(
VALUES' + @values + '
) AS Source (name, principal_id, version, definition)
ON Target.name = Source.name
AND Target.principal_id = Source.principal_id
WHEN MATCHED THEN
UPDATE SET version = Source.version, definition = Source.definition
WHEN NOT MATCHED BY Target THEN
INSERT (name, principal_id, version, definition)
VALUES (name, principal_id, version, definition);
';
"@
$command.CommandTimeout = 60
$result = $command.ExecuteScalar()
$command.Dispose()
$connection.Dispose()
$result | Set-Clipboard
echo "Your SQL Diagram was successfully scripted out and copied to the clipboard"
Just launch Windows PowerShell ISE, paste this into the top pane, and update the connection string (around line 10). Save the script. Run the script.