Insert values into SQL column with mysql-python

I am trying to insert values into a column of a SQL table, using MySQLdb in Python 2.7. I am having problems with the command to insert a list into one column.
I have a simple table called 'name' as shown below:
+--------+-----------+----------+--------+
| nameid | firstname | lastname | TopAdd |
+--------+-----------+----------+--------+
|      1 | Cookie    | Monster  |        |
|      2 | Guy       | Smiley   |        |
|      3 | Big       | Bird     |        |
|      4 | Oscar     | Grouch   |        |
|      5 | Alastair  | Cookie   |        |
+--------+-----------+----------+--------+
Here is how I created the table:
CREATE TABLE `name` (
`nameid` int(11) NOT NULL AUTO_INCREMENT,
`firstname` varchar(45) DEFAULT NULL,
`lastname` varchar(45) DEFAULT NULL,
`TopAdd` varchar(40) NOT NULL,
PRIMARY KEY (`nameid`)
) ENGINE=InnoDB AUTO_INCREMENT=16 DEFAULT CHARSET=utf8
Here is how I populated the table:
INSERT INTO `test`.`name`
(`firstname`,`lastname`)
VALUES
("Cookie","Monster"),
("Guy","Smiley"),
("Big","Bird"),
("Oscar","Grouch"),
("Alastair","Cookie");
DISCLAIMER: The original source for the above MySQL example is here.
Here is how I created the new column named TopAdd:
ALTER TABLE name ADD TopAdd VARCHAR(40) NOT NULL
I now have a list of 5 values that I would like to insert into the TopAdd column, one per row. Here is the list:
vals_list = ['aa','bb','cc','dd','ee']
Here is what I have tried (UPDATE statement inside loop):
vals = tuple(vals_list)
for self.ijk in range (0,len(self.vals)):
    self.cursor.execute ("UPDATE name SET TopAdd = %s WHERE 'nameid' = %s" % (self.vals[self.ijk],self.ijk+1))
I get the following error message:
Traceback (most recent call last):
  File "C:\Python27\mySQLdbClass.py", line 70, in <module>
    main()
  File "C:\Python27\mySQLdbClass.py", line 66, in main
    db.mysqlconnect()
  File "C:\Python27\mySQLdbClass.py", line 22, in mysqlconnect
    self.cursor.execute ("UPDATE name SET TopAdd = %s WHERE 'nameid' = %s" % (self.vals[self.ijk],self.ijk+1))
  File "C:\Python27\lib\site-packages\MySQLdb\cursors.py", line 205, in execute
    self.errorhandler(self, exc, value)
  File "C:\Python27\lib\site-packages\MySQLdb\connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
_mysql_exceptions.OperationalError: (1054, "Unknown column 'aa' in 'field list'")
[Finished in 0.2s with exit code 1]
Is there a way to insert these values into the column with a loop or directly as a list?

Try this:
vals_list = ['aa','bb','cc','dd','ee']
for i, j in enumerate(vals_list):
    self.cursor.execute("UPDATE test.name SET TopAdd = '%s' WHERE nameid = %s" % (str(j), int(i + 1)))

One problem is here:
for self.ijk in range (0,len(self.vals)):
The range function is creating a list of integers (presumably, the list [0, 1, 2, 3, 4]). When iterating over a collection in a for loop, you bind each successive item in the collection to a name; you do not access them as attributes of an instance. (It also seems appropriate to use xrange here; see xrange vs range.) So the self reference is nonsensical; beyond that, ijk is a terrible name for an integer value, and there's no need to supply the default start value of zero. KISS:
for i in range(len(self.vals)):
Not only does this make your line shorter (and thus easier to read), using i to represent an integer value in a loop is a convention that's well understood. Now we come to another problem:
self.cursor.execute ("UPDATE name SET TopAdd = %s WHERE 'nameid' = %s" % (self.vals[self.ijk],self.ijk+1))
You're not properly parameterizing your query here. Do not follow the advice in the other answer, which may fix your current error but leaves your code prone to wasted debugging time at best, and SQL injection and/or data integrity issues at worst. Instead, replace the % with a comma so that the execute function does the work of safely quoting and escaping parameters for you.
With that change, and minus the quotation marks around your column name, nameid:
query = "UPDATE name SET TopAdd = %s WHERE nameid = %s;"
for i in range(len(self.vals)):
self.cursor.execute(query, (self.vals[i], i + 1))
Should work as expected. You can still use enumerate as suggested by the other answer, but there's no reason to go around typecasting everything in sight; enumerate is documented and gives exactly the types you already want:
for i, val in enumerate(self.vals):
    self.cursor.execute(query, (val, i + 1))
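If your driver supports it (MySQLdb does), the loop can also collapse into a single executemany call; a minimal sketch, assuming self.vals holds the five values in nameid order:

query = "UPDATE name SET TopAdd = %s WHERE nameid = %s"
# executemany re-runs the parameterized statement once per tuple;
# the driver still quotes and escapes every parameter for you.
self.cursor.executemany(query, [(val, i + 1) for i, val in enumerate(self.vals)])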

Related

How do I preserve utf-8 JSON values and write them correctly to a utf-8 txt file in Python 3.8.2?

I recently wrote a python script to extract some data from a JSON file and use it to generate some SQL Insert values for the following statement:
INSERT INTO `card`(`artist`,`class_pid`,`collectible`,`cost`, `dbfid`, `api_db_id`, `name`, `rarity`, `cardset_pid`, `cardtype`, `attack`, `health`, `race`, `durability`, `armor`,`multiclassgroup`, `text`) VALUES ("generated entry goes here")
The names of some of the attributes are different in my SQL table, but the same values are used (for example, cardClass in the JSON file/Python script is referred to as class_pid in the SQL table). The values generated from the script are valid SQL and can successfully be inserted into the database; however, I noticed that in the resulting export.txt file some of the values changed from what they originally were. For example, the following JSON entries from a utf-8 encoded JSON file:
[{"artist":"Arthur Bozonnet","attack":3,"cardClass":8,"collectible":1,"cost":2,"dbfId":2545,"flavor":"And he can't get up.","health":2,"id":"AT_003","mechanics":["HEROPOWER_DAMAGE"],"name":"Fallen Hero","rarity":"RARE","set":1,"text":"Your Hero Power deals 1 extra damage.","type":"MINION"},{"artist":"Ivan Fomin","attack":2,"cardClass":11,"collectible":1,"cost":2,"dbfId":54256,"flavor":"Were you expectorating another bad pun?","health":4,"id":"ULD_182","mechanics":["TRIGGER_VISUAL"],"name":"Spitting Camel","race":"BEAST","rarity":"COMMON","set":22,"text":"[x]At the end of your turn,\n  deal 1 damage to another  \nrandom friendly minion.","type":"MINION"}]
produce this output:
('Arthur Bozonnet',8,1,2,'2545','AT_003','Fallen Hero','RARE',1,'MINION',3,2,'NULL',0,0,'NULL','Your Hero Power deals 1\xa0extra damage.'),('Ivan Fomin',11,1,2,'54256','ULD_182','Spitting Camel','COMMON',22,'MINION',2,4,'BEAST',0,0,'NULL','[x]At the end of your turn,\n\xa0\xa0deal 1 damage to another\xa0\xa0\nrandom friendly minion.')
As you can see, some of the values from the JSON entries have been altered somehow, as if the text encoding was changed somewhere, even though in my script I made sure that the JSON file was opened with utf-8 encoding and the resulting text file was also written in utf-8 to match. My aim is to preserve the values exactly as they are in the JSON file and transfer them to the generated SQL value entries unchanged. As an example, in the generated SQL I want the "text" value of the second entry to be:
"[x]At the end of your turn,\n deal 1 damage to another \nrandom friendly minion."
instead of:
"[x]At the end of your turn,\n\xa0\xa0deal 1 damage to another\xa0\xa0\nrandom friendly minion."
I tried using functions such as unicodedata.normalize() but unfortunately it did not seem to change the output in any way.
This is the script that I wrote to generate the SQL values:
import json
import io

chosen_keys = ['artist','cardClass','collectible','cost',
               'dbfId','id','name','rarity','set','type','attack','health',
               'race','durability','armor',
               'multiClassGroup','text']

defaults = ['NULL','0','0','0',
            'NULL','NULL','NULL','NULL','0','NULL','0','0',
            'NULL','0','0',
            'NULL','NULL']

def saveChangesString(dataList, filename):
    with io.open(filename, 'w', encoding='utf-8') as f:
        f.write(dataList)

def generateSQL(json_dict):
    count = 0
    endCount = 1
    records = ""
    finalState = ""
    print('\n' + str(len(json_dict)) + ' records will be processed\n')
    for i in json_dict:
        entry = "("
        jcount = 0
        for j in chosen_keys:
            if j in i.keys():
                if str(i.get(j)).isdigit() and j != 'dbfId':
                    entry = entry + str(i.get(j))
                else:
                    entry = entry + repr(str(i.get(j)))
            else:
                if str(defaults[jcount]).isdigit() and j != 'dbfId':
                    entry = entry + str(defaults[jcount])
                else:
                    entry = entry + repr(str(defaults[jcount]))
            if jcount != len(chosen_keys) - 1:
                entry = entry + ","
            jcount = jcount + 1
        entry = entry + ")"
        if count != len(json_dict) - 1:
            entry = entry + ","
        count = count + 1
        if endCount % 100 == 0 and endCount >= 100 and endCount < len(json_dict):
            print('processed records ' + str(endCount - 99) + ' - ' + str(endCount))
        if endCount + 100 > len(json_dict):
            finalState = 'processed records ' + str(endCount + 1) + ' - ' + str(len(json_dict))
        if endCount == len(json_dict):
            print(finalState)
        records = records + entry
        endCount = endCount + 1
    saveChangesString(records, 'export.txt')
    print('done')

with io.open('cards.collectible.sample.example.json', 'r', encoding='utf-8') as f:
    json_to_dict = json.load(f)

generateSQL(json_to_dict)
Any help would be greatly appreciated as the JSON file I am actually using contains over 2000 entries so I would prefer to avoid having to edit things manually. Thank you.
Also the SQL table structure code is:
-- phpMyAdmin SQL Dump
CREATE TABLE `card` (
`pid` int(10) NOT NULL,
`api_db_id` varchar(50) NOT NULL,
`dbfid` varchar(50) NOT NULL,
`name` varchar(50) NOT NULL,
`cardset_pid` int(10) NOT NULL,
`cardtype` varchar(50) NOT NULL,
`rarity` varchar(20) NOT NULL,
`cost` int(3) NOT NULL,
`attack` int(10) NOT NULL,
`health` int(10) NOT NULL,
`artist` varchar(50) NOT NULL,
`collectible` tinyint(1) NOT NULL,
`class_pid` int(10) NOT NULL,
`race` varchar(50) NOT NULL,
`durability` int(10) NOT NULL,
`armor` int(10) NOT NULL,
`multiclassgroup` varchar(50) NOT NULL,
`text` text NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;
ALTER TABLE `card`
ADD PRIMARY KEY (`pid`);
ALTER TABLE `card`
MODIFY `pid` int(10) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=1;
COMMIT;
\xa0 is a variant on space. Is it coming from Word?
But, more relevant, it is not utf8; it is latin1 or some other non-utf8 encoding. You need to go back to where it came from and change that to utf8.
Or, if your next step is just to put it into a MySQL table, then you need to tell the truth about the client -- namely that it is encoded in latin1 (not utf8). Once you have done that, MySQL will take care of the conversion for you during the INSERT.
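If the goal is simply to keep plain spaces in the generated SQL, a minimal sketch is to strip the character before building each entry. This assumes U+00A0 (NO-BREAK SPACE) is the only offending character; note that unicodedata.normalize('NFKC', ...) performs the same mapping for this character, provided it is applied before repr() escapes it:

def clean_text(value):
    # Replace NO-BREAK SPACE (U+00A0) with a plain ASCII space before
    # the value is embedded in the generated SQL entry.
    return value.replace(u'\xa0', u' ')

print(clean_text(u'deal 1\xa0extra damage.'))  # -> 'deal 1 extra damage.'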

Mysql - update + insert json

I got a table that has a JSON field. The default value for the field is NULL; now I'd like to update a single key inside the JSON data.
+-------+--------+
| [int] | [JSON] |
| xy    | ipdata |
+-------+--------+
So the field could be something like this:
{ ip: "233.233.233.233", "data": "test", "name": "Peterson", "full_name": "Hanson Peterson" }
So I'd like to update the IP.
update table set ipdata = JSON_SET(ipdata, "$.ip", "newIp") where xy = 2;
But what happens if the field is NULL? The query above does not seem to "create" a new JSON with just the field IP. It just does nothing.
How can I tell MySQL to insert {"ip": "newIp"} if the field is empty and otherwise just update the ip JSON key?
You can use CASE ... WHEN to handle NULL: when the field is NULL, build a fresh document with JSON_OBJECT() and set it instead:
UPDATE table
SET ipdata = CASE WHEN ipdata IS NULL THEN JSON_OBJECT("ip", "newIp")
                  ELSE JSON_SET(ipdata, "$.ip", "newIp")
             END
WHERE xy = 2;

How to find if a function exists in PostgreSQL?

Unlike tables or sequences, user-defined functions cannot be found through pg_class. There are questions on how to find a list of all functions in order to delete or grant them, but how to find an individual function (with known name and argument types) is not self-evident from them. So how do I find whether a function exists or not?
EDIT: I want to use it in a function, in an automated manner. Which solution is the best performance-wise? Trapping errors is quite expensive, so I guess the best solution for me would be something without the extra step of translating an error to false, but I might be wrong in this assumption.
Yes, you cannot find functions in pg_class, because functions are stored in the system table pg_proc:
postgres=# \df
                               List of functions
 Schema |        Name        | Result data type | Argument data types  |  Type
--------+--------------------+------------------+----------------------+--------
 public | foo                | integer          | a integer, b integer | normal
 public | function_arguments | text             | oid                  | normal
(2 rows)
A query listing custom functions based on pg_proc is simply:
postgres=# select p.oid::regprocedure
             from pg_proc p
             join pg_namespace n
               on p.pronamespace = n.oid
            where n.nspname not in ('pg_catalog', 'information_schema');

           oid
-------------------------
 foo(integer,integer)
 function_arguments(oid)
(2 rows)
The simplest and fastest tests for a function's existence are casts to regproc (name only) or regprocedure (name plus parameter types):
postgres=# select 'foo'::regproc;
 regproc
---------
 foo
(1 row)

postgres=# select 'foox'::regproc;
ERROR:  function "foox" does not exist
LINE 1: select 'foox'::regproc;
               ^

postgres=# select 'foo(int, int)'::regprocedure;
     regprocedure
----------------------
 foo(integer,integer)
(1 row)

postgres=# select 'foo(int, text)'::regprocedure;
ERROR:  function "foo(int, text)" does not exist
LINE 1: select 'foo(int, text)'::regprocedure;
               ^
Or you can do something similar with a test against pg_proc:
postgres=# select exists(select * from pg_proc where proname = 'foo');
 exists
--------
 t
(1 row)

postgres=# select exists(select *
                           from pg_proc
                          where proname = 'foo'
                            and function_arguments(oid) = 'integer, integer');
 exists
--------
 t
(1 row)
where:
CREATE OR REPLACE FUNCTION public.function_arguments(oid)
RETURNS text LANGUAGE sql AS $function$
  select string_agg(par, ', ')
    from (select format_type(unnest(proargtypes), null) par
            from pg_proc
           where oid = $1) x
$function$
Or you can use the built-in function pg_get_function_arguments.
P.S. A trick for getting oriented in the system catalog: use the psql option -E, which echoes the query behind every backslash command:
[pavel@localhost ~]$ psql -E postgres
psql (9.2.8, server 9.5devel)
Type "help" for help.

postgres=# \df
********* QUERY **********
SELECT n.nspname as "Schema",
  p.proname as "Name",
  pg_catalog.pg_get_function_result(p.oid) as "Result data type",
  pg_catalog.pg_get_function_arguments(p.oid) as "Argument data types",
 CASE
  WHEN p.proisagg THEN 'agg'
  WHEN p.proiswindow THEN 'window'
  WHEN p.prorettype = 'pg_catalog.trigger'::pg_catalog.regtype THEN 'trigger'
  ELSE 'normal'
 END as "Type"
FROM pg_catalog.pg_proc p
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE pg_catalog.pg_function_is_visible(p.oid)
      AND n.nspname <> 'pg_catalog'
      AND n.nspname <> 'information_schema'
ORDER BY 1, 2, 4;
**************************

                               List of functions
 Schema |        Name        | Result data type | Argument data types  |  Type
--------+--------------------+------------------+----------------------+--------
 public | foo                | integer          | a integer, b integer | normal
 public | function_arguments | text             | oid                  | normal
(2 rows)
I think the easiest way would be to use pg_get_functiondef().
If it returns something, the function is there, otherwise the function does not exist:
select pg_get_functiondef('some_function()'::regprocedure);
select pg_get_functiondef('some_function(integer)'::regprocedure);
The drawback is that it will produce an error if the function isn't there instead of simply returning an empty result. But this could e.g. be overcome by writing a PL/pgSQL function that catches the exception and returns false instead.
Based on @PavelStehule's answer, this is how I check for a function in my scripts (using Postgres exceptions and the available exception codes):
DO $_$
BEGIN
    BEGIN
        SELECT 'some_schema.some_function(text)'::regprocedure;
    EXCEPTION WHEN undefined_function THEN
        -- do something here, i.e. create function
    END;
END $_$;
Late to the party, but it could be something like this. (Don't use SELECT instead of PERFORM if you are not using the result, or you will get an error complaining about it:
ERROR: query has no destination for result data
So the following code will work:
DO $$
BEGIN
    BEGIN
        perform pg_get_functiondef('some_function()'::regprocedure);
        raise notice 'it exists!';
    EXCEPTION WHEN undefined_function THEN
        raise notice 'Does not exist';
    END;
END $$;
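On PostgreSQL 9.4 and later there is also to_regproc()/to_regprocedure(), which return NULL instead of raising an error when the name does not resolve; a minimal sketch from Python (psycopg2 and the connection settings here are assumptions):

import psycopg2

conn = psycopg2.connect(dbname='postgres')  # hypothetical connection settings
cur = conn.cursor()
# to_regprocedure() yields NULL (None in Python) for a missing function,
# so no exception trap is needed.
cur.execute("SELECT to_regprocedure(%s) IS NOT NULL", ('foo(integer, integer)',))
print(cur.fetchone()[0])  # True if foo(integer, integer) exists, else False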

sqlalchemy FetchedValue and primary_key

I'm trying to create a table that uses UUID_SHORT() as a primary key, with a trigger that inserts the value on INSERT. I'm having trouble making SQLAlchemy recognize a column as a primary key without it complaining about a missing default. If I do include a default value, it will use that default value even after a flush, despite declaring server_default=FetchedValue(). The only way I can seem to get things to work properly is if the column is not a primary key.
I'm using Pyramid, SQLAlchemy ORM, and MySQL.
Here's the model object:
Base = declarative_base()

class Patient(Base):
    __tablename__ = 'patient'
    patient_id = Column(BigInteger(unsigned=True), server_default=FetchedValue(), primary_key=True, autoincrement=False)
    details = Column(Binary(10000))
in initializedb.py I have:
with transaction.manager:
    patient1 = Patient(details=None)
    DBSession.add(patient1)
    DBSession.flush()
    print(patient1.patient_id)
running ../bin/initialize_mainserver_db development.ini gives me the following error:
2012-11-01 20:17:22,168 INFO [sqlalchemy.engine.base.Engine][MainThread] BEGIN (implicit)
2012-11-01 20:17:22,169 INFO [sqlalchemy.engine.base.Engine][MainThread] INSERT INTO patient (details) VALUES (%(details)s)
2012-11-01 20:17:22,169 INFO [sqlalchemy.engine.base.Engine][MainThread] {'details': None}
2012-11-01 20:17:22,170 INFO [sqlalchemy.engine.base.Engine][MainThread] ROLLBACK
Traceback (most recent call last):
  File "/sites/metrics_dev/lib/python3.3/site-packages/sqlalchemy/engine/base.py", line 1691, in _execute_context
    context)
  File "/sites/metrics_dev/lib/python3.3/site-packages/sqlalchemy/engine/default.py", line 333, in do_execute
    cursor.execute(statement, parameters)
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 418, in execute
    self._handle_result(self._connection.cmd_query(stmt))
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 345, in _handle_result
    self._handle_noresultset(result)
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 321, in _handle_noresultset
    self._warnings = self._fetch_warnings()
  File "/sites/metrics_dev/lib/python3.3/site-packages/mysql/connector/cursor.py", line 608, in _fetch_warnings
    raise errors.get_mysql_exception(res[0][1],res[0][2])
mysql.connector.errors.DatabaseError: 1364: Field 'patient_id' doesn't have a default value
Running a manual insert using the mysql client results in the everything working fine, so the problem seems to be with SQLAlchemy.
mysql> insert into patient(details) values (null);
Query OK, 1 row affected, 1 warning (0.00 sec)
mysql> select * from patient;
+-------------------+---------+
| patient_id        | details |
+-------------------+---------+
| 94732327996882980 | NULL    |
+-------------------+---------+
1 row in set (0.00 sec)
mysql> show triggers;
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
| Trigger | Event | Table | Statement | Timing | Created | sql_mode | Definer | character_set_client | collation_connection | Database Collation |
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
| before_insert_patient | INSERT | patient | SET new.`patient_id` = UUID_SHORT() | BEFORE | NULL | | root@localhost | utf8 | utf8_general_ci | latin1_swedish_ci |
+-----------------------+--------+---------+-------------------------------------+--------+---------+----------+----------------+----------------------+----------------------+--------------------+
1 row in set (0.00 sec)
Here's what I did as a work-around...
DBSession.execute(
    """CREATE TRIGGER before_insert_patient BEFORE INSERT ON `patient`
    FOR EACH ROW BEGIN
        IF (NEW.patient_id IS NULL OR NEW.patient_id = 0) THEN
            SET NEW.patient_id = UUID_SHORT();
        END IF;
    END""")
and in the patient class:
patient_id = Column(BigInteger(unsigned=True), default=text("uuid_short()"), primary_key=True, autoincrement=False, server_default="0")
So, the trigger only does something if someone accesses the database directly and not through the python code. And hopefully no one does patient1 = Patient(patient_id=0, details = None) as SQLAlchemy will use the '0' value instead of what the trigger produces
For completeness, here are two additional possible solutions for your question (also available here), based on your answer. They are slightly simpler than your solution (omitting passing parameters with correct default values) and using SQLAlchemy constructs for defining the triggers.
#!/usr/bin/env python3
from sqlalchemy import BigInteger, Column, create_engine, DDL, event
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import FetchedValue
from sqlalchemy.sql.expression import func

Base = declarative_base()

class PatientOutputMixin(object):
    '''
    Mixin to output human readable representations of models.
    '''
    def __str__(self):
        return '{}'.format(self.patient_id)

    def __repr__(self):
        return str(self)

class Patient1(Base, PatientOutputMixin):
    '''
    First version of ``Patient`` model.
    '''
    __tablename__ = 'patient_1'
    patient_id = Column(BigInteger, primary_key=True,
                        default=func.uuid_short())

# the following trigger is only required if rows are inserted in the table
# not using the above model/table definition, otherwise it is redundant
create_before_insert_trigger = DDL('''
CREATE TRIGGER before_insert_%(table)s BEFORE INSERT ON %(table)s
FOR EACH ROW BEGIN
    IF NEW.patient_id IS NULL THEN
        SET NEW.patient_id = UUID_SHORT();
    END IF;
END
''')

event.listen(Patient1.__table__, 'after_create',
             create_before_insert_trigger.execute_if(dialect='mysql'))
# end of optional trigger definition

class Patient2(Base, PatientOutputMixin):
    '''
    Second version of ``Patient`` model.
    '''
    __tablename__ = 'patient_2'
    patient_id = Column(BigInteger, primary_key=True,
                        default=0, server_default=FetchedValue())

create_before_insert_trigger = DDL('''
CREATE TRIGGER before_insert_%(table)s BEFORE INSERT ON %(table)s
FOR EACH ROW BEGIN
    SET NEW.patient_id = UUID_SHORT();
END
''')

event.listen(Patient2.__table__, 'after_create',
             create_before_insert_trigger.execute_if(dialect='mysql'))

# test models
engine = create_engine('mysql+oursql://test:test@localhost/test?charset=utf8')
Base.metadata.bind = engine
Base.metadata.drop_all()
Base.metadata.create_all()
Session = sessionmaker(bind=engine)
session = Session()

for patient_model in [Patient1, Patient2]:
    session.add(patient_model())
    session.add(patient_model())
    session.commit()
    print('{} instances: {}'.format(patient_model.__name__,
                                    session.query(patient_model).all()))
Running the above script produces the following (sample) output:
Patient1 instances: [22681783426351145, 22681783426351146]
Patient2 instances: [22681783426351147, 22681783426351148]

SQL query to remove certain text from each field in a specific column?

I recently recoded one of my sites, and the database structure is a little bit different.
I'm trying to convert the following:
*----*-----------------------------*
| id | file_name                   |
*----*-----------------------------*
| 1  | 1288044935741310953434.jpg  |
*----*-----------------------------*
| 2  | 1288044935741310352357.rar  |
*----*-----------------------------*
Into the following:
*----*-----------------------------*
| id | file_name                   |
*----*-----------------------------*
| 1  | 1288044935741310953434      |
*----*-----------------------------*
| 2  | 1288044935741310352357      |
*----*-----------------------------*
I know that I could do a foreach loop with PHP, and explode the file extension off the end, and update each row that way, but that seems like way too many queries for the task.
Is there any SQL query that I could run that would allow me to remove the file extension from each field in the file_name column?
You can use the REPLACE() function in native MySQL to do a simple string replacement. Note that it replaces every occurrence of the substring anywhere in the value, not just a trailing extension.
UPDATE tbl SET file_name = REPLACE(file_name, '.jpg', '');
UPDATE tbl SET file_name = REPLACE(file_name, '.rar', '');
This should work (note that it assumes every extension is exactly three characters plus the dot):
UPDATE MyTable
SET file_name = SUBSTRING(file_name,1, CHAR_LENGTH(file_name)-4)
This will strip off the final extension, if any, from file_name each time it is run. It is agnostic with respect to extension (so you can have ".foo" some day) and won't harm extensionless records.
UPDATE tbl
SET file_name = TRIM(TRAILING CONCAT('.', SUBSTRING_INDEX(file_name, '.', -1)) FROM file_name);
You can use SUBSTRING_INDEX function
SUBSTRING_INDEX(str,delim,count)
Where str is the string, delim is the delimiter (from which you want a substring to the left or right of), and count specifies which delimiter (in the event there are multiple occurrences of the delimiter in the string)
Example:
UPDATE tbl SET file_name = SUBSTRING_INDEX(file_name, '.', 1);
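Whichever variant you choose, it is worth previewing the transformation before running the UPDATE; a quick sketch (MySQLdb, plus hypothetical credentials and the table name tbl assumed):

import MySQLdb

conn = MySQLdb.connect(user='user', passwd='secret', db='mydb')  # hypothetical credentials
cur = conn.cursor()
# Dry run: show what SUBSTRING_INDEX would leave behind for every row.
cur.execute("SELECT file_name, SUBSTRING_INDEX(file_name, '.', 1) FROM tbl")
for original, stripped in cur.fetchall():
    print('%s -> %s' % (original, stripped))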