Yii2: updating tables without a primary key

I have a table named SYSTEM_PARAMS like below
+------------+------------------+------+-----+---------------------+-----------------------------+
| Field      | Type             | Null | Key | Default             | Extra                       |
+------------+------------------+------+-----+---------------------+-----------------------------+
| name       | varchar(50)      | NO   |     | NULL                |                             |
| value      | varchar(100)     | YES  |     | NULL                |                             |
| updated_at | timestamp        | NO   |     | CURRENT_TIMESTAMP   | on update CURRENT_TIMESTAMP |
| created_at | timestamp        | NO   |     | 0000-00-00 00:00:00 |                             |
| created_by | int(11)          | NO   |     | NULL                |                             |
| updated_by | int(11) unsigned | YES  |     | 0                   |                             |
+------------+------------------+------+-----+---------------------+-----------------------------+
where I keep the names of all the cron jobs I have to run; against each job name I update value with the ID of the currently running job. As you can see from the schema above, the table does not have a primary key defined, so I defined the primaryKey() method inside the model like below:
public function primaryKey($asArray = FALSE) {
    return 'name';
}
but this gives me an error saying that I cannot make a static method non-static. What am I doing wrong here?
PHP Fatal error: Cannot make static method
yii\db\ActiveRecord::primaryKey() non static in class
common\models\SystemParams

Exactly what it says: primaryKey() is a static method in the ActiveRecord class. If you want to override it, you have to make it static as well:
public static function primaryKey()
{
    return ['name'];
}
PS. It must return an array. See this note.


Upsert in MariaDB/MySQLdb

I have created a table as below:
+------------------+--------------+------+-----+------------+----------------+
| Field            | Type         | Null | Key | Default    | Extra          |
+------------------+--------------+------+-----+------------+----------------+
| category         | varchar(20)  | NO   |     | NULL       |                |
| report_id        | int(5)       | NO   | PRI | NULL       | auto_increment |
| name             | varchar(255) | NO   | UNI | NULL       |                |
| URL              | varchar(200) | NO   |     | NULL       |                |
| refresh_type     | varchar(30)  | NO   |     | On Request |                |
| create_dt        | date         | NO   |     | 9999-01-01 |                |
| modified_dt      | date         | NO   |     | 9999-01-01 |                |
| project_type     | varchar(60)  | YES  |     | NULL       |                |
| project_name     | varchar(60)  | YES  |     | NULL       |                |
| project_location | varchar(255) | YES  |     | NULL       |                |
+------------------+--------------+------+-----+------------+----------------+
I have inserted records into this as well. I am trying to automate the process of inserting and maintaining these records, but I only get a few of the columns (category, name and URL) from a data feed.
According to the process, a user can update the records and change the other fields to make them useful; the next time I insert records, I want to perform an upsert based on name. I am doing this in Python. Here are the steps I tried:
dash = df.loc[:, ['folder', 'name', 'url', 'url']].values.tolist()
dash_insert_sql = """insert into dashboards (category, name, URL)
                     values (%s,%s,%s)
                     on duplicate key update URL = values(%s)"""
cur.executemany(dash_insert_sql, dash)
When I try this, I am getting
Traceback (most recent call last):
File "C:\Python\Mysql\report_uri.py", line 65, in <module>
cur.executemany(dash_insert_sql, dash)
File "C:\Python\lib\site-packages\MySQLdb\cursors.py", line 228, in executemany
q_prefix = m.group(1) % ()
TypeError: not enough arguments for format string
Here is the example data:
My input is a list of lists.
[['Student Information', 'Active Students not Scheduled', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/af4f7325-860f-4958-ad83-bb900f726b32&page=vi6', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/af4f7325-860f-4958-ad83-bb900f726b32&page=vi6'], ['Student Information', 'Admissions Statistical Comparison from Snapshots', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/6150909f-3ab4-4ec7-8ef0-7efdb1f09300&page=vi6', 'https://example.com/SASReportViewer/?reportUri=/reports/reports/6150909f-3ab4-4ec7-8ef0-7efdb1f09300&page=vi6']]
Please let me know how to proceed or where I am going wrong. Thank you.
The stray %s inside values(%s) in the ON DUPLICATE KEY UPDATE clause is the problem: MySQLdb's executemany() %-formats the parts of the statement around the VALUES (...) clause with an empty argument tuple, and a placeholder left in those parts produces exactly the "not enough arguments for format string" error you are seeing. Refer to the column by name instead, values(URL), and drop the duplicated url column from the list:
dash = df.loc[:, ['folder', 'name', 'url']].values.tolist()
dash_insert_sql = """insert into dashboards (category, name, URL)
                     values (%s,%s,%s)
                     on duplicate key update URL = values(URL)"""
cur.executemany(dash_insert_sql, dash)
I tested it by changing the URL of a report to that of another one: it updated the URL and no new rows were added.
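The failure can be reproduced without a database. It comes down to plain Python %-formatting: formatting a string that still contains a %s placeholder with an empty argument tuple raises the same TypeError that MySQLdb's executemany() surfaces when a placeholder ends up outside the VALUES clause. A minimal sketch:

```python
# A fragment with no placeholders formats fine against an empty tuple.
fragment = "insert into dashboards (category, name, URL) "
ok = fragment % ()

# A fragment with a stray %s placeholder does not.
stray = "on duplicate key update URL = values(%s) "
try:
    stray % ()
    error = None
except TypeError as exc:
    error = str(exc)

print(ok)
print(error)  # not enough arguments for format string
```

This is why replacing values(%s) with values(URL) fixes the query: the ON DUPLICATE KEY UPDATE part no longer contains a placeholder for executemany() to trip over.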

How to convert longblob to readable format

I'm using ARA for my Ansible project to store playbook output in a database (MySQL).
Some table contents are not readable; I would like to know how to convert them so I can develop a PHP page to display those values.
Here is my table description:
mysql> desc data;
+-------------+--------------+------+-----+---------+-------+
| Field       | Type         | Null | Key | Default | Extra |
+-------------+--------------+------+-----+---------+-------+
| id          | varchar(36)  | NO   | PRI | NULL    |       |
| playbook_id | varchar(36)  | YES  | MUL | NULL    |       |
| key         | varchar(255) | YES  |     | NULL    |       |
| value       | longblob     | YES  |     | NULL    |       |
| type        | varchar(255) | YES  |     | NULL    |       |
+-------------+--------------+------+-----+---------+-------+
As you can see, the value column is a longblob, so the output is not readable:
mysql> select value from data;
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| value |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| xœmŽË
ƒ0å"¸kMÄG;àÊU«#±5b &!7
ýû&R
¥Åp3Ì$§'fåc’Â!{©” ¸x™ÁQ¢Í"¦ÐíùB©`€‹ãš
b%so­päjTëÌb÷j½9c<×ð_yÑ”»2øaó¢Ipíg;âOºd¬Û€~˜†xÆi~_À¡Ï¿[M“u¼`‘ó*´îáWòìI=N |
| xœmŽË
ƒ0å"¸³&â£f_påªU Ø1““R
¥Åp3Ì$Çæ0
˜ä}–Â!©” 8{™ÃA¢Í#¦Ð©`€«ãšŒb#Ë`­päbTçÌjwj»:c<×ð_EÙTY|ŸUÁË6µ_ì„?銱þôÃ4Äã0ÎËûŽCñÝjë˜lšà%‹\Ô¡u
¿’'ìÂ=O
I tried to convert the data to UTF-8, but it gives me NULL:
SELECT CONVERT(value USING utf8) FROM data;
+---------------------------+
| CONVERT(value USING utf8) |
+---------------------------+
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
| NULL |
+---------------------------+
17 rows in set, 18 warnings (0,00 sec)
I helped design some of those models :) (although I am no longer an active developer on the project).
Have you considered just using the Ara web interface as the UI for this data? It's generally a bad idea to poke directly at the database like this because it typically hasn't been designed as a stable API: there's an excellent chance that some future update will break your code, because the assumption is that only ARA is accessing the database.
In any case:
In order to save space, many of the values stored in the database are compressed using Python's zlib.compress method. This is handled by the CompressedData type, which looks like this:
class CompressedData(types.TypeDecorator):
    """
    Implements a new sqlalchemy column type that automatically serializes
    and compresses data when writing it to the database and decompresses
    the data when reading it.

    http://docs.sqlalchemy.org/en/latest/core/custom_types.html
    """
    impl = types.LargeBinary

    def process_bind_param(self, value, dialect):
        return zlib.compress(encodeutils.to_utf8(jsonutils.dumps(value)))

    def process_result_value(self, value, dialect):
        if value is not None:
            return jsonutils.loads(zlib.decompress(value))
        else:
            return value

    def copy(self, **kwargs):
        return CompressedData(self.impl.length)
You will need to use the zlib.decompress method -- or the PHP equivalent -- to read those values. I am not a PHP developer, but it looks as if PHP has a zlib module.
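A minimal sketch of the round trip in Python, with json.dumps standing in for the jsonutils wrapper used by ARA. Note that the xœ prefix visible in your SELECT output is the two-byte zlib header 0x78 0x9c, which is a good sign the values really are zlib-compressed:

```python
import json
import zlib

# Compress a value the way CompressedData.process_bind_param does.
original = {"changed": True, "msg": "ok"}
blob = zlib.compress(json.dumps(original).encode("utf-8"))

# blob is what sits in the longblob column; it starts with the
# zlib header b"\x78\x9c" (rendered as "xœ" in the mysql client).
# Decompressing and parsing it recovers the original structure,
# mirroring process_result_value.
recovered = json.loads(zlib.decompress(blob))
print(recovered == original)
```

In PHP the equivalent calls would be gzuncompress() followed by json_decode().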

I want another table to be updated when I create an entry

I want start_date and start_time copied into latest_date and latest_time whenever I add a new entry to my logbook, with the rows matched on logbook.logbook_index_id = logbook_index.id for all entries.
mysql> describe logbook;
+------------------+------------------+------+-----+---------+----------------+
| Field            | Type             | Null | Key | Default | Extra          |
+------------------+------------------+------+-----+---------+----------------+
| id               | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| logbook_index_id | int(10) unsigned | NO   |     | NULL    |                |
| start_date       | date             | NO   |     | NULL    |                |
| start_time       | time             | NO   |     | NULL    |                |
mysql> describe logbook_index;
+-------------+------------------+------+-----+---------+----------------+
| Field       | Type             | Null | Key | Default | Extra          |
+-------------+------------------+------+-----+---------+----------------+
| id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| first_date  | date             | NO   |     | NULL    |                |
| first_time  | time             | NO   |     | NULL    |                |
| latest_date | date             | NO   |     | NULL    |                |
| latest_time | time             | NO   |     | NULL    |                |
+-------------+------------------+------+-----+---------+----------------+
So far I have got this:
create trigger update_dates after insert on logbook
for each row update logbook_index
set latest_date = start_date where logbook_index.id = logbook_index_id;
I bet I am doing this mostly wrong. How does this work correctly, and how do I get the time copied too?
If I understood your question correctly, a trigger is indeed the right tool. You can put an AFTER INSERT trigger on the table you insert into, and inside the trigger put the UPDATE to the other table. To access column values of the newly inserted record, use the NEW keyword; that also lets you copy the time in the same statement:
CREATE TRIGGER update_dates AFTER INSERT ON logbook
FOR EACH ROW
UPDATE logbook_index
SET latest_date = NEW.start_date,
    latest_time = NEW.start_time
WHERE logbook_index.id = NEW.logbook_index_id;
Notice the keyword NEW that is used to access the newly inserted record.
If you were using an AFTER UPDATE trigger, you could also access the old values by using OLD.
What you're searching for is a Trigger, a procedure that's automatically invoked in response to an event, in your case the insertion of a row in the logbook table.
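The behavior can be demonstrated end to end with Python's built-in sqlite3 module. This is only an illustration (SQLite's CREATE TRIGGER syntax omits FOR EACH ROW and MySQL's column types differ), but NEW works the same way, and the table and column names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logbook_index (
        id INTEGER PRIMARY KEY,
        latest_date TEXT,
        latest_time TEXT
    );
    CREATE TABLE logbook (
        id INTEGER PRIMARY KEY,
        logbook_index_id INTEGER,
        start_date TEXT,
        start_time TEXT
    );
    -- Copy start_date/start_time into the matching index row
    -- whenever a logbook entry is inserted.
    CREATE TRIGGER update_dates AFTER INSERT ON logbook
    BEGIN
        UPDATE logbook_index
        SET latest_date = NEW.start_date,
            latest_time = NEW.start_time
        WHERE logbook_index.id = NEW.logbook_index_id;
    END;
""")

conn.execute("INSERT INTO logbook_index (id) VALUES (1)")
conn.execute(
    "INSERT INTO logbook (logbook_index_id, start_date, start_time) "
    "VALUES (1, '2023-05-01', '12:30:00')"
)
latest = conn.execute(
    "SELECT latest_date, latest_time FROM logbook_index WHERE id = 1"
).fetchone()
print(latest)  # ('2023-05-01', '12:30:00')
```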

MySQL - 'Updated By' column to automatically update with user who modified the row?

Here's my table.
+------------------+--------------+------+-----+---------+----------------+
| Field            | Type         | Null | Key | Default | Extra          |
+------------------+--------------+------+-----+---------+----------------+
| ID               | int(11)      | NO   | PRI | NULL    | auto_increment |
| Postcode         | varchar(255) | YES  |     | NULL    |                |
| Town             | varchar(255) | YES  |     | NULL    |                |
| Region           | varchar(255) | YES  |     | NULL    |                |
| Company Name     | varchar(255) | YES  |     | NULL    |                |
| Fee              | double       | YES  |     | NULL    |                |
| Company Benefits | varchar(255) | YES  |     | NULL    |                |
| Date Updated     | date         | YES  |     | NULL    |                |
| Website          | mediumtext   | YES  |     | NULL    |                |
| Updated By       | varchar(255) | YES  |     | NULL    |                |
| Notes            | varchar(255) | YES  |     | NULL    |                |
| LNG              | varchar(255) | YES  |     | NULL    |                |
| LAT              | varchar(255) | YES  |     | NULL    |                |
+------------------+--------------+------+-----+---------+----------------+
You can see we have an "Updated by" column.
How can I make it so that, when a user updates a row, the "Updated By" column automatically updates (or is filled in when they add a new row) with the currently logged-in user's name?
Many Thanks
You will have to take care of this explicitly: whenever an UPDATE happens, update that column as well, as below. The best way to ensure it is to have your application logic fill in the column from the currently logged-in user's principal or claim whenever the record is updated:
update tbl1
set ...,
    `Updated By` = <logged in user name>
where ID = <some val>
You can use USER() or CURRENT_USER() in UPDATE or INSERT statements to achieve the needed effect.
From my side, the only really secure way is to create stored procedures that perform the inserts or updates to the desired table.
Indeed, this problem was discussed here:
mysql Set default value of a column as the current logged in user
Something like this (note the backticks, since the column name contains a space):
CREATE TRIGGER `updater`.`tableName_BEFORE_INSERT` BEFORE INSERT ON `tableName`
FOR EACH ROW
BEGIN
    SET NEW.`Updated By` = CURRENT_USER();
END

MySQL error: com.mysql.jdbc.NotUpdatable

I am getting this error:
javax.servlet.ServletException: com.mysql.jdbc.NotUpdatable: Result
Set not updatable.
I know this error is usually about a missing primary key, but I always add a primary key when I create a table, so this table has one too. I am posting part of my code:
Statement st = con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_UPDATABLE);
ResultSet rs = st.executeQuery("Select * from test3 order by rand() limit 5");
List<String> arrlist = new ArrayList<>();
while (rs.next()) {
    String xa = rs.getString("display");
    if (xa.equals("1")) {
        arrlist.add(rs.getString("question_text"));
    }
    rs.updateString("display", "0");
    rs.updateRow();
}
Just tell me if something is going wrong in this code. Please help.
This is my database:
+----------------+---------------+------+-----+---------+----------------+
| Field          | Type          | Null | Key | Default | Extra          |
+----------------+---------------+------+-----+---------+----------------+
| id             | int(11)       | NO   | PRI | NULL    | auto_increment |
| index_question | varchar(45)   | YES  |     | NULL    |                |
| question_no    | varchar(10)   | YES  |     | NULL    |                |
| question_text  | varchar(1000) | YES  |     | NULL    |                |
| file_name      | varchar(128)  | YES  |     | NULL    |                |
| attachment     | mediumblob    | YES  |     | NULL    |                |
| display        | varchar(10)   | YES  |     | NULL    |                |
+----------------+---------------+------+-----+---------+----------------+
You either have to update the row immediately after you have fetched it (SELECT ... FOR UPDATE together with rs.updateRow()), or you have to write an explicit UPDATE tablename SET ... WHERE ... statement to update the row at any time.
For an updatable ResultSet the query also cannot use functions; try removing order by rand() from the SQL query string.
See the JDBC 2.1 API Specification, section 5.6, for more details.
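To illustrate the second suggestion (a separate UPDATE instead of an updatable ResultSet), here is the same select-then-update pattern sketched in Python with the standard library's sqlite3 module. The table and column names follow the question; in Java you would do the equivalent with a PreparedStatement and the row ids collected from the first query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE test3 (
        id INTEGER PRIMARY KEY,
        question_text TEXT,
        display TEXT
    )
""")
conn.executemany(
    "INSERT INTO test3 (question_text, display) VALUES (?, ?)",
    [("Q1", "1"), ("Q2", "0"), ("Q3", "1")],
)

# Fetch up to 5 random displayable questions, remembering their ids ...
rows = conn.execute(
    "SELECT id, question_text FROM test3 "
    "WHERE display = '1' ORDER BY RANDOM() LIMIT 5"
).fetchall()
questions = [text for _id, text in rows]

# ... then mark them as shown with an explicit UPDATE by primary key,
# instead of relying on an updatable result set.
conn.executemany(
    "UPDATE test3 SET display = '0' WHERE id = ?",
    [(row_id,) for row_id, _text in rows],
)

remaining = conn.execute(
    "SELECT COUNT(*) FROM test3 WHERE display = '1'"
).fetchone()[0]
print(sorted(questions), remaining)  # ['Q1', 'Q3'] 0
```

Because the UPDATE targets rows by primary key, it works regardless of whether the SELECT used functions like rand().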