Delete rows from Query Object - sqlalchemy

I was wondering if it's possible to delete some random rows from a Query Object before doing a bulk update.
Example:
writerRes = self.session.query(table)
writerRes = writerRes.filter(table.userID == 3)
# -> delete some of the rows randomly here
writerRes.update({"userID": 4})
Is there an easy way to do that?

Selecting random rows with SQLAlchemy depends on the database backend. Based on that answer:
PostgreSQL and SQLite3:
from sqlalchemy import func  # provides func.random() / func.rand()

number_of_random_rows = 3
rand_rows = session.query(table.userID).order_by(func.random()).limit(number_of_random_rows).subquery()
session.query(table).filter(table.userID.in_(rand_rows)).delete(synchronize_session='fetch')
MySQL:
number_of_random_rows = 3
rand_rows = session.query(table.userID).order_by(func.rand()).limit(number_of_random_rows).subquery()
session.query(table).filter(table.userID.in_(rand_rows)).delete(synchronize_session='fetch')
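For reference, the statement the PostgreSQL version emits looks roughly like this (a sketch assuming a hypothetical writer table; note it matches on userID, so if that column is not unique the DELETE removes every row sharing the sampled values):
DELETE FROM writer
WHERE userID IN (SELECT userID
                 FROM writer
                 ORDER BY random()
                 LIMIT 3);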
...

Related

UPDATE query using multiple AND conditions slow in MySQL

I am trying to update a table (~2 million rows) based on another table (10k rows). However, my update query is taking extremely long (30 minutes) without any output as of yet. Is there a way to optimise this query?
UPDATE global_mobility_report
SET global_mobility_report.locationID1 =
    (SELECT geography.locationID
     FROM geography
     WHERE global_mobility_report.country_region = geography.country_region
       AND global_mobility_report.sub_region_1 = geography.sub_region_1
       AND global_mobility_report.sub_region_2 = geography.sub_region_2
       AND global_mobility_report.metro_area = geography.metro_area
       AND global_mobility_report.iso_3166_2_code = geography.iso_3166_2_code
       AND global_mobility_report.census_fips_code = geography.census_fips_code);
UPDATE global_mobility_report
JOIN geography USING ( country_region,
sub_region_1,
sub_region_2,
metro_area,
iso_3166_2_code,
census_fips_code )
SET global_mobility_report.locationID1 = geography.locationID;
An index on the join columns will improve this considerably; a sketch of one is below.
Rows in global_mobility_report which have no matching row in geography will not be updated (they stay unchanged). If you need them set to NULL, use a LEFT JOIN instead.
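For example (the index name is hypothetical; long VARCHAR columns may need prefix lengths in MySQL):
CREATE INDEX idx_geography_lookup
ON geography (country_region,
              sub_region_1,
              sub_region_2,
              metro_area,
              iso_3166_2_code,
              census_fips_code);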
I simply indexed the country_region, sub_region_1, sub_region_2, metro_area, iso_3166_2_code, census_fips_code columns and it worked like a charm!

Update multiple tables in one query, in MySQL

I want to update my tables from a CSV file. The data from the CSV is imported into the table "temp_update_stany", but I can't update the tables. The query runs with no errors, but nothing is updated.
The table from the CSV is:
produkt|quantity|price|active|czas
Query:
UPDATE lp2_product tabela
INNER JOIN lp2_stock_available stany ON (tabela.id_product = stany.id_product)
INNER JOIN lp2_product_lang lang ON (tabela.id_product = lang.id_product)
INNER JOIN temp_update_stany csv ON (tabela.id_product = csv.produkt)
SET
tabela.active = csv.active,
tabela.price = csv.price,
lang.available_now = csv.czas,
stany.quantity = csv.quantity
WHERE
csv.produkt = tabela.id_product
OR csv.produkt = lang.id_product
OR csv.produkt = stany.id_product
And the output from the query:
Modified records: 0 (query took 0.0322 seconds).
But, for example, the 'active' column in "lp2_product" has value 0 for all products, while temp_update_stany has value 1 for all of them.
Yes, this is PrestaShop, and a simple script for updating quantities and prices.
As per the comments above, an UPDATE reports zero rows affected if there is no net change. So if the tables already hold the desired values, the UPDATE is a no-op and no rows are "affected."
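A quick way to see this behaviour (a sketch; the id value is made up):
UPDATE lp2_product SET active = 1 WHERE id_product = 42;
-- Rows matched: 1  Changed: 1  Warnings: 0
UPDATE lp2_product SET active = 1 WHERE id_product = 42;
-- Rows matched: 1  Changed: 0  Warnings: 0
The second, identical statement matches the row but changes nothing, so the client reports 0.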

Update MySQL cell after fetching related cell value via select?

SQL:
$mysqli->query("UPDATE results
SET result_value = '".$row[0]['logo_value']."'
WHERE logo_id = '".$mysqli->real_escape_string($_GET['logo_id'])."'
AND user_id = '".$user_data[0]['user_id']."'");
This results table also contains result_tries, which I'd like to fetch before doing the update so I can use it to modify result_value... Is there a way to do it in a single shot instead of first doing a SELECT and then doing an UPDATE?
Is this possible?
Basically:
UPDATE results SET result_value = result_value + $row[0][logo_value]
for just a simple addition. You CAN use existing fields in the record being updated as part of the update, so if you don't want just addition, there aren't many limits on what logic you can use instead of x = x + y.
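For instance, a sketch that folds result_tries into the same statement (the scoring formula and the base value 10 are made up for illustration):
-- hypothetical: award fewer points the more tries were used, then bump the counter;
-- MySQL applies SET assignments left to right, so result_value still sees the old result_tries
UPDATE results
SET result_value = result_value + (10 - result_tries),
    result_tries = result_tries + 1
WHERE logo_id = ? AND user_id = ?;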

MySQL multi-table UPDATE first record

I have a permission system between two objects (users => firms) with a permissions table for linking. Now I need to update the firms table with the first linked user's id. I wrote this query:
UPDATE parim_firms, parim_permissions
SET parim_firms.firm_user_id = parim_permissions.permission_a_id
WHERE parim_firms.firm_user_id = 0
AND parim_firms.firm_id = parim_permissions.permission_b_id
Now if one firm has multiple linked users, will it be updated with the first or the last matched user?
My logic says that after the first update firm_user_id != 0, so that row doesn't get updated anymore.
But I'm not sure; maybe it runs the update for all joined rows and the last row wins.
And if it doesn't, how can I modify the query to update with only the first matched result?
UPDATE parim_firms
SET parim_firms.firm_user_id =
    (SELECT parim_permissions.permission_a_id
     FROM parim_permissions
     WHERE parim_firms.firm_id = parim_permissions.permission_b_id
     LIMIT 1)
WHERE parim_firms.firm_user_id = 0;
or
UPDATE parim_firms a
JOIN parim_permissions b ON a.firm_id = b.permission_b_id
SET a.firm_user_id = b.permission_a_id
WHERE a.firm_user_id = 0;
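If "first" needs to be deterministic, a sketch using MIN() (assuming the lowest permission_a_id counts as first):
UPDATE parim_firms
SET firm_user_id =
    (SELECT MIN(permission_a_id)
     FROM parim_permissions
     WHERE permission_b_id = parim_firms.firm_id)
WHERE firm_user_id = 0
  AND EXISTS (SELECT 1 FROM parim_permissions
              WHERE permission_b_id = parim_firms.firm_id);
The EXISTS guard keeps firms with no linked users from being set to NULL.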

Long execution time when updating a table with a join in SQL Server 2008

I'm facing a big problem when trying to update a table containing stock data joined with a table containing product classifications. This operation is taking a long time to execute.
Table dw_giacenze a (rows having the flag_nomatch parameter equal to 'T') is inner joined with dw_key_prod z on the ecat_key field.
a contains up to 3 million records, z 150k records.
It takes more than 2 hours to execute.
Below is the update query I'm using.
update dw_giacenze
set cate_ecat_key = z.cate_ecat_key,
sottocat_ecat_key = z.sottocat_ecat_key,
marchio_key = z.marchio_key,
sottocat_bi_key = z.sottocat_bi_key,
gruppo_bi_key = z.gruppo_bi_key,
famiglia_bi_key = z.famiglia_bi_key,
flag_nomatch = NULL
from dw_giacenze a
inner join dw_key_prod z on
z.ecat_key = a.ecat_key
where
a.flag_nomatch = 'T';
Can anyone help me in optimizing it?
Thanks in advance!
Enrico
I would suggest focusing on a.flag_nomatch = 'T'.
A great way to get a really clear picture of what's going on is to use SQL Server Profiler. If this shows that your reads equal the number of rows in the table, then that's definitely an issue, and adding an index on flag_nomatch should help; see the sketch below.
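For example (a sketch; the index name is made up):
CREATE NONCLUSTERED INDEX IX_dw_giacenze_flag_nomatch
ON dw_giacenze (flag_nomatch);
With this in place, the filter on flag_nomatch = 'T' can use a seek instead of scanning the whole table.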
Alternatively, you could separate this out and update things individually (to start with):
UPDATE dw_giacenze
SET sottocat_ecat_key = (SELECT sottocat_ecat_key
                         FROM dw_key_prod
                         WHERE dw_key_prod.ecat_key = dw_giacenze.ecat_key)
WHERE dw_giacenze.flag_nomatch = 'T';
I did notice that the first column in your SET statement is the same one used in your join. That means you are setting it to the exact same value, so you should be able to remove it anyway.