MySQL database column constraint error

I am having a problem and I would really appreciate some help. I am using a PHP application to interact with the database. I have a database which works perfectly. However, after I backed it up and moved it to another PC, it started to act up, even though it is identical to the original. I have a table called Authorize with a column authorized whose default is not NULL. When I try to update the authorized column, the following message comes up (on the original system it still works fine, and I can't seem to find the problem):
Error: Column 'authorized' cannot be null
sql: update `authorized` set `authorized` = :authorized where `authorized_id` = :identity_id;
arr_sql_param: Array
(
[:identity_id] => 22
[:authorized] =>
)
Sent From: update_grid()

Reading your code:
public function sql_update() {
    ...
    // make sql statement from key and values in $_POST data
    foreach ($_POST as $key => $val) {
        $sql_set .= "`$key` = :$key, ";
        $sql_param[":$key"] = $this->cast_value($val, $key);
    ...
    // posted values are saved here for pdo execute
    $sql_param = array();
    $sql_param[':identity_id'] = $identity_id;
    ...
    $sql_final = "update `$this->table` set $sql_set where `$this->identity_name` = :identity_id;";
    ...
And the error:
Error: Column 'authorized' cannot be null
sql: update authorized set authorized = :authorized where authorized_id = :identity_id;
I notice that :authorized does appear in the SQL statement, but the value bound to it in arr_sql_param is empty.
Which leads to two possible conclusions:
1. If the column cannot be NULL in this environment but the same code works fine on your development system (your PC), then the database schema may differ between the two systems. On the new environment, the authorized column in table authorized is defined NOT NULL, while on your dev environment you don't have this constraint. Compare the output of SHOW CREATE TABLE authorized on both systems to see whether this is true.
2. Since the column value for authorized comes from the $_POST array, is it possible that it simply isn't posted by the browser for some reason? I can't find a reason for that in your code, though.
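If the second case turns out to be the cause, one defensive fix is to skip keys whose posted value is empty instead of binding NULL into a NOT NULL column. A minimal sketch in Python, mirroring the PHP foreach over $_POST above (the `status` key and the helper name are hypothetical, for illustration only):

```python
def build_update(table, identity_name, identity_id, posted):
    """Build a parameterized UPDATE, skipping keys whose posted value is empty."""
    set_parts = []
    params = {":identity_id": identity_id}
    for key, val in posted.items():
        if val is None or val == "":
            continue  # don't bind NULL/empty into a NOT NULL column
        set_parts.append(f"`{key}` = :{key}")
        params[f":{key}"] = val
    sql = (f"update `{table}` set {', '.join(set_parts)} "
           f"where `{identity_name}` = :identity_id;")
    return sql, params

# The empty 'authorized' value is left out of the SET clause entirely.
sql, params = build_update("authorized", "authorized_id", 22,
                           {"authorized": "", "status": "ok"})
```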


Conditional duplicate key updates with MySQL using Peewee

I have a case where I need to use conditional updates/inserts using peewee.
The query looks similar to what is shown here: conditional-duplicate-key-updates-with-mysql.
As of now, what I'm doing is a get_or_create, and then, if it was not a create, checking the condition in code and calling insert with on_conflict_replace.
But this is prone to race conditions, since the condition check happens back in the web server, not in the DB server.
Is there a way to do the same with insert in peewee?
Using: AWS Aurora-MySQL-5.7
Yes, Peewee supports the ON DUPLICATE KEY UPDATE syntax. Here's an example from the docs:
from datetime import datetime
from peewee import Model, TextField, DateTimeField, IntegerField

class User(Model):
    username = TextField(unique=True)
    last_login = DateTimeField(null=True)
    login_count = IntegerField()

# Insert a new user.
User.create(username='huey', login_count=0)

# Simulate the user logging in. The login count and timestamp will be
# either created or updated correctly.
now = datetime.now()
rowid = (User
         .insert(username='huey', last_login=now, login_count=1)
         .on_conflict(
             preserve=[User.last_login],  # Use the value we would have inserted.
             update={User.login_count: User.login_count + 1})
         .execute())
Doc link: http://docs.peewee-orm.com/en/latest/peewee/querying.html#upsert
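To address the conditional part of the question: on MySQL the values in the update dict can be SQL expressions, so the condition can be evaluated inside the database instead of on the web server. A sketch of the underlying SQL such a query would correspond to (table and column names assumed from the model above; in Peewee the same expression could be built with fn.IF(...) inside the update dict):

```sql
INSERT INTO user (username, last_login, login_count)
VALUES ('huey', NOW(), 1)
ON DUPLICATE KEY UPDATE
    -- the condition runs inside MySQL, so there is no read-check-write race
    login_count = IF(VALUES(last_login) > last_login, login_count + 1, login_count);
```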

Use same mysqli prepared statement for different queries?

During some testing, a little question popped up. When I code database updates, I usually do it via callbacks written in PHP, to which I simply pass a given mysqli connection object as a function argument. Executing, for example, three queries across the same single connection proved to be much faster than closing and reopening a DB connection for each query in the sequence. This also works easily with SQL transactions; the connection can be passed along to callbacks without any issues.
My question is: can you also do this with prepared statement objects? That is, assuming we have successfully established a $conn object representing the mysqli connection, is stuff like this legitimate?
function select_users( $users_id, $stmt ) {
    $sql = "SELECT username FROM users where ID = ?";
    mysqli_stmt_prepare( $stmt, $sql );
    mysqli_stmt_bind_param( $stmt, "i", $users_id );
    mysqli_stmt_execute( $stmt );
    return mysqli_stmt_get_result( $stmt );
}

function select_labels( $artist, $stmt ) {
    $sql = "SELECT label FROM labels where artist = ?";
    mysqli_stmt_prepare( $stmt, $sql );
    mysqli_stmt_bind_param( $stmt, "s", $artist );
    mysqli_stmt_execute( $stmt );
    return mysqli_stmt_get_result( $stmt );
}

$stmt = mysqli_stmt_init( $conn );
$users = select_users( 1, $stmt );
$rappers = select_labels( "rapperxyz", $stmt );
or is it bad practice; and you should rather use:
$stmt_users = mysqli_stmt_init( $conn );
$stmt_rappers = mysqli_stmt_init( $conn );
$users = select_users( 1, $stmt_users );
$rappers = select_labels( "rapperxyz", $stmt_rappers );
During testing, I noticed that the method of passing a single statement object along to callbacks works for server calls where I run about 4 not-too-complicated DB queries via the 4 corresponding callbacks in a row.
However, when I do a server call with about 10 different queries, I sometimes (yes, only sometimes, with pretty much the same data across executions, which seems like weird behavior to me) get the error "Commands out of sync; you can't run this command now", as well as other errors I've never seen before, like the number of variables not matching the number of parameters, although they match perfectly when I check them. The only fix I found after some research was indeed to use a different statement object for each callback. So I wondered: should you actually ALWAYS use ONE prepared statement object for ONE query, which you may then execute N times in a row?
Yes.
The "commands out of sync" error occurs because the MySQL protocol is not like HTTP. You can't send requests any time you want. There is state on the server side (i.e. in mysqld) that expects a certain sequence of requests. This is what's known as a stateful protocol.
Compare with a protocol like ftp. You can do an ls in an ftp client, but the list of files you get back depends on the current working directory. If you were sharing that ftp client connection among multiple functions in your app, you don't know that another function hasn't changed the working directory. So you can't be sure the file list you get from ls represents the directory you thought you were in.
In MySQL too, there's state on the server side. You can only have one transaction open at a time, and you can only have one query executing at a time. The MySQL client does not allow you to execute a new query while there are still rows to be fetched from an in-progress query. See "Commands out of sync" in the MySQL documentation on common errors.
So if you pass your statement handle around to some callback functions, how can that function know it's safe to execute the statement?
IMO, the only safe way to use a statement is to use it immediately.
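To illustrate that discipline with a sketch (using Python's sqlite3 as a stand-in for mysqli, with made-up table data): each query gets its own fresh statement handle, and its result set is fully consumed before anything else runs on the connection.

```python
import sqlite3

# Sample connection and data for the illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'huey')")

def select_users(conn, user_id):
    cur = conn.cursor()                  # fresh statement handle per query
    cur.execute("SELECT username FROM users WHERE id = ?", (user_id,))
    rows = cur.fetchall()                # consume all rows immediately
    cur.close()                          # release the handle before returning
    return rows

print(select_users(conn, 1))  # → [('huey',)]
```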

Populating Autocomplete list with new tags

I'm using Lucee 5.x and MariaDB (MySQL).
I have a user supplied comma delimited list. I need to query the database and if the item isn't in the database, I need to add it.
user supplied list
green
blue
purple
white
database items
black
white
red
blue
pink
orange
lime
It is not expected that the database list would grow to more than 30 items but end-users always find 'creative' ways to use the tools we provide them.
So using the user supplied list above, only green and purple should be added to the database.
Do I compare the user-supplied list against the database items, or vice versa? Would the process change if the user-supplied list count exceeds what is in the database (i.e., if the user submits 10 items and the database only contains 5)? I'm not sure which loop is the better way to determine which items are new. This needs to be in cfscript, and I'm looking at the looping options outlined here (https://www.petefreitag.com/cheatsheets/coldfusion/cfscript/):
FOR Loop
FOR IN Loop (Array)
FOR IN Loop (Query)
I tried MySQL's NOT IN, but that left me with the existing database values in addition to the new ones. I know this should be simple; I'm overcomplicating it somewhere and/or am too close to the problem to see the solution.
You could do this:
get a list with existing items from database
append user supplied list
remove duplicates
update db if items were added
<cfscript>
    var userItems = '"green","blue","purple","white"';
    var dbItems = '"black","white","red","blue","pink","orange","lime"';
    var result = ListRemoveDuplicates( ListAppend(dbItems, userItems) );
    if (ListLen(result) neq ListLen(dbItems)) {
        // update db
    }
</cfscript>
Update (only new items)
<cfscript>
    var userItems = '"green","blue","purple","white"';
    var dbItems = '"black","white","red","blue","pink","orange","lime"';
    var newItems = '';
    ListEach(userItems, function (item) {
        if (not ListFind(dbItems, item)) {
            newItems = ListAppend(newItems, item);
        }
    });
</cfscript>
trycf.com gist: https://trycf.com/gist/f6a44821165338b3c10b7808606979e6/lucee5?theme=monokai
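The underlying comparison is just a set difference; a sketch in Python (illustrative only, using the lists from the question):

```python
user_items = ["green", "blue", "purple", "white"]
db_items = ["black", "white", "red", "blue", "pink", "orange", "lime"]

# Items the user supplied that are not yet in the database.
existing = set(db_items)
new_items = [item for item in user_items if item not in existing]
print(new_items)  # → ['green', 'purple']
```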
Again, since this is an operation the database can do, I'd feed the input data to the database and let it decide how to deal with duplicate keys. I don't recommend using CF to loop through your values to check them and then doing the INSERT; that requires multiple trips to the database plus processing on the application server that isn't really needed.
My suggestion is to use MariaDB's INSERT ... ON DUPLICATE KEY UPDATE ... syntax. This also requires that whatever column you are inserting into actually has a UNIQUE constraint on it. Without that constraint, your database itself doesn't care whether you have duplicate data, which can cause its own set of issues.
For the database, we have
CREATE TABLE t1 (
    mycolor varchar(50)
  , CONSTRAINT constraint_mycolor UNIQUE (mycolor)
);

INSERT INTO t1 (mycolor)
VALUES ('black'),('white'),('red'),('blue'),('pink'),('orange'),('lime');
The ColdFusion is:
<cfscript>
    myInputValues = "green,blue,purple,white" ;
    myQueryValues = "" ;

    function sanitizeValue ( String inVal required ) {
        // do sanitization stuff here
        var sanitizedInVal = arguments.inVal ;
        return sanitizedInVal ;
    }

    myQueryValues = myInputValues.listMap(
        function(i) {
            return "('" & sanitizeValue(i) & "')" ;
        }
    ) ;

    // This takes parameterization out of the cfquery tag and
    // performs sanitization and validation before building the
    // query string.
    myQuery = new query();
    myQuery.name = "myQuery";
    myQuery.setDataSource("dsn");
    sqlString = "INSERT INTO t1(mycolor) VALUES "
        & myQueryValues
        & " ON DUPLICATE KEY UPDATE mycolor=mycolor;" ;
    myQuery.setSQL(sqlString);
    myQueryResult = myQuery.execute().getResult();
</cfscript>
First, build up your input values (myInputValues). You'll want to do validation and sanitization on them to prevent nastiness from entering your database. I created a sanitizeValue function to be the placeholder for the sanitization and validation operations.
myQueryValues will become a string list of the values in the proper format that we will use to insert into the database.
Then we just build up a new query(), using myQueryValues in the sqlString. Again, since we are building a string of multiple VALUES to INSERT, I don't think there's a way to use queryparam for those VALUES. But since we cleaned up our string earlier, it should do much of what cfqueryparam does anyway.
We use MariaDB's INSERT INTO .... ON DUPLICATE KEY UPDATE ... syntax to only insert unique values. Again, this requires that the database itself has a constraint to prevent duplicates in whatever column we're inserting.
For a demo: https://dbfiddle.uk/?rdbms=mariadb_10.2&fiddle=4308da3addb9135e49eeee451c6e9e58
This should do what you're looking to do without beating up on your database too much. I don't have a Lucee or MariaDB server set up to test, so you'll have to give it a shot and see how it performs. I don't know how big your database is or will become, but this should still query pretty quickly.
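The single-trip approach above can be sketched against an in-memory SQLite database (illustrative only; SQLite's INSERT OR IGNORE plays the role of MariaDB's INSERT ... ON DUPLICATE KEY UPDATE, relying on the same UNIQUE constraint):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (mycolor TEXT, "
             "CONSTRAINT constraint_mycolor UNIQUE (mycolor))")
conn.executemany("INSERT INTO t1 (mycolor) VALUES (?)",
                 [(c,) for c in ("black", "white", "red", "blue",
                                 "pink", "orange", "lime")])

# One trip to the database: duplicates are skipped by the UNIQUE constraint.
user_items = ["green", "blue", "purple", "white"]
conn.executemany("INSERT OR IGNORE INTO t1 (mycolor) VALUES (?)",
                 [(c,) for c in user_items])
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM t1").fetchone()[0]
print(count)  # → 9  (the 7 originals plus green and purple)
```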

Pass custom variables in MySQL connection

I am setting up a MySQL connection (in my case PDO but it shouldn't matter) in a REST API.
The REST API uses an internal authentication (username / password). There are multiple user groups accessing the REST API, e.g. customers, IT, backend, customer service. They all use the same MySQL connection in the end because they also use the same end points most of the time.
In the MySQL database I would like to save the user who is responsible for a change in a data set.
I would like to implement this on the MySQL layer through a trigger, so I have to pass the user information from the REST API to this trigger somehow. There are some MySQL facilities like CURRENT_USER() or status variables that allow you to query meta-information. My idea was to somehow pass additional information in the connection string to MySQL, so that I don't have to use different database users but can still retrieve this information from within the trigger.
I have done some research and don't think it is possible, but since it would facilitate my task a lot, I still wanted to ask on SO whether someone knows a solution to my problem.
I would set a session variable on connect.
Thanks to the comment from @Álvaro González for reminding me about running a command on PDO init.
The suggestion of adding data to a temp table isn't necessary. It's just as good to set one or more session variables, assuming you just need a few scalars.
$pdo = new PDO($dsn, $user, $password, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::MYSQL_ATTR_INIT_COMMAND => "SET @myvar = 'myvalue', @myothervar = 'othervalue'"
]);
It's also possible to set session variables at any time after connect, with a call to $pdo->exec().
$pdo->exec("SET @thirdvar = 1234");
You can read session variables in your SQL queries:
$stmt = $pdo->query("SELECT @myvar, @myothervar");
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    print_r($row);
}
You can also read session variables in triggers:
CREATE TRIGGER mytrig BEFORE INSERT ON mytable
FOR EACH ROW
    SET NEW.somecolumn = @myvar;

Why does Salesforce prevent me from creating a Push Topic with a query that contains relationships?

When I execute this code in the developer console
PushTopic pushTopic = new PushTopic();
pushTopic.ApiVersion = 23.0;
pushTopic.Name = 'Test';
pushTopic.Description = 'test';
pushTopic.Query = 'SELECT Id, Account.Name FROM Case';
insert pushTopic;
System.debug('Created new PushTopic: '+ pushTopic.Id);
I receive this message:
FATAL ERROR System.DmlException: Insert failed. First exception on row
0; first error: INVALID_FIELD, relationships are not supported:
[QUERY]
The same query runs fine on the Query Editor, but when I assign it to a Push Topic I get the INVALID_FIELD exception.
If the bottom line is what the exception message says, that relationships are just not supported by Push Topic objects, how do I create a Push Topic object that will return the data I'm looking for?
Why
Salesforce prevents this because it would require joining tables, and joins in Salesforce's database are expensive due to its multi-tenant architecture. Usually when they add a new feature they will not support joins, as that requires more optimization of the feature.
Push Topics are still quite new to the system and need to be real-time; anything that would slow them down, I'd say, needs to be trimmed.
I'd suggest you look more closely at your requirement and see if there is something else that will work for you.
Workaround
A potential workaround is to add a formula field to the Case object with the data you need and include that in the query instead. This may not work, though, as it will still require a join.
A final option may be to use a workflow rule or trigger to copy the account name into a custom field on the Case object; this way the data is local to the Case, so it doesn't require a join.
PushTopics support only a very small subset of SOQL queries; see the list of unsupported statements here:
https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/unsupported_soql_statements.htm
However this should work:
PushTopic casePushTopic = new PushTopic();
casePushTopic.ApiVersion = 23.0;
casePushTopic.Name = 'CaseTopic';
casePushTopic.Description = 'test';
casePushTopic.Query = 'SELECT Id, AccountId FROM Case';
insert casePushTopic;

PushTopic accountPushTopic = new PushTopic();
accountPushTopic.ApiVersion = 23.0;
accountPushTopic.Name = 'AccountTopic';
accountPushTopic.Description = 'test';
accountPushTopic.Query = 'SELECT Id, Name FROM Account';
insert accountPushTopic;
It really depends on your use case, though; if this is for replicating into an RDBMS, it should be enough, and you can use a join there to get the full data.