How to specify columns for a SQLAlchemy Insert object with from_select? - sqlalchemy

I'm using a SQLAlchemy insert object to quickly insert a bunch of data from another table. The schemas are as follow:
create table master (id serial, name varchar);
create table mapping (id serial, new_name varchar, master_id integer);
-- master_id is a foreign key back to the master table, id column
I populate my master table with unique names and IDs. I then want my mapping table to get seeded with data from this master table. The SQL would be
insert into mapping (master_id, new_name) select id, name from master;
I use the following SQLAlchemy statement. The problem I get is that SQLAlchemy can't seem to resolve the names because logically they are different between the two tables.
stmt = sa_mapping_table.insert().from_select(['name', 'id'], stmt)
Is there a way to tell the insert object, "using this select statement select these columns and put the results in these columns of the target table"?

I think you are close, but you should specify the columns of mapping that the select from master is inserted into. This should work, where master_t and mapping_t are the SQLAlchemy Table() objects:
master_t = Table('master', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String, nullable=False))

mapping_t = Table('mapping', metadata,
    Column('id', Integer, primary_key=True),
    Column('new_name', String, nullable=False),
    Column('master_id', Integer, ForeignKey('master.id'), nullable=False))

# ...

with engine.connect() as conn, conn.begin():
    select_q = select(master_t.c.id, master_t.c.name)
    stmt = mapping_t.insert().from_select(["master_id", "new_name"], select_q)
    conn.execute(stmt)
Creates the following SQL:
INSERT INTO mapping (master_id, new_name) SELECT master.id, master.name
FROM master
See the docs at
insert-from-select
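For reference, the INSERT ... SELECT that from_select() emits can be sketched end-to-end with the standard-library sqlite3 module (sqlite3 is used here only so the example is self-contained; the table layout follows the question's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE mapping (id INTEGER PRIMARY KEY, new_name TEXT,
                          master_id INTEGER REFERENCES master(id));
""")
conn.executemany("INSERT INTO master (name) VALUES (?)", [("alpha",), ("beta",)])

# Equivalent of mapping_t.insert().from_select(["master_id", "new_name"], select_q):
conn.execute("INSERT INTO mapping (master_id, new_name) SELECT id, name FROM master")

rows = conn.execute("SELECT master_id, new_name FROM mapping ORDER BY master_id").fetchall()
print(rows)  # [(1, 'alpha'), (2, 'beta')]
```

The column list on the INSERT side is what maps the select's id/name onto master_id/new_name, which is exactly what the first argument of from_select() does.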

Related

ALTER TABLE to ADD COLUMN with Python variable in MySQL table after API response

I need to ALTER TABLE posts, ADD column product_id, and insert the product_id Python variable into my MySQL table after the API response. The output of the variable (which is returned/parsed from the response successfully) looks similar to this:
prod_ABc123
I'm new here and have tried everything I could find on the Internet (ALTER TABLE, UPDATE, related posts, etc.) and am baffled why it's not working. Can anyone help please? Everything except the ALTER TABLE works:
try:
    """
    Add user input to database
    """
    conn = get_db_connection()
    conn.execute('INSERT INTO posts (title, content, price) VALUES (?, ?, ?)',
                 (title, content, price))
    conn.commit()

    """
    Create product and pass product id and metadata price in create product
    response to create product price
    """
    product = Product.create(name=title, description=content, metadata={'amount': price})
    get_product_response = json.dumps(product)
    load_product_response = json.loads(get_product_response)
    product_id = load_product_response['id']
    price = load_product_response['metadata']['amount']
    Price.create(product=product_id, unit_amount=price, currency='usd')
    print(f'The {product_id} product was added with a price of ${price}.')

    """
    Add product id to database
    """
    cur = conn.cursor()
    prod_query = "ALTER TABLE posts ADD (product_id) TEXT NOT NULL, INSERT INTO posts VALUES (?)"
    cur.execute(prod_query, product_id)
    cur.commit()
The answers provided by the posters here were very helpful!
In the relational world, ALTER TABLE is a DDL command and INSERT is a DML command. You need to run them as separate statements, each with its own syntax.
General syntax:
--ALTER to Add column (changing table definition)
ALTER TABLE <table_name> ADD COLUMN <col_name> <data_type>;
--INSERT data/row
INSERT INTO <table_name> [(col1, col2, ...)] VALUES (val1, val2, ...);
As general advice: since DDL commands are usually one-time commands, it is better to run them separately from your DML.
Also note that if you're using MySQL, the parameter marker is %s with mysql-connector, while for Microsoft SQL Server the marker is ? with the pyodbc connector. So please take note.
For the insert statement, here it is, based on all you have told me so far:
conn = ...  # establish your database connection here
cursor = conn.cursor()
cursor.execute("INSERT INTO posts (title, content, price, product_id) VALUES (?, ?, ?, ?)",
               (yourtitleTextField.text(),
                yourContentTextField.text(),
                yourPriceTextField.text(),
                yourProductIDtextfield.text()))
conn.commit()
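The DDL/DML separation described above can be sketched with the standard-library sqlite3 module (table and column names follow the question; in the real app product_id would come from the API response, and the ALTER TABLE would normally be a one-time migration rather than request-time code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (title TEXT, content TEXT, price REAL)")
conn.execute("INSERT INTO posts VALUES (?, ?, ?)", ("Widget", "A widget", 9.99))

product_id = "prod_ABc123"  # in the real app, parsed from the API response

cur = conn.cursor()
# DDL first: change the table definition as its own statement.
cur.execute("ALTER TABLE posts ADD COLUMN product_id TEXT")
# Then DML: fill in the new column with a separate UPDATE.
cur.execute("UPDATE posts SET product_id = ? WHERE title = ?", (product_id, "Widget"))
conn.commit()

row = conn.execute("SELECT product_id FROM posts").fetchone()
print(row)  # ('prod_ABc123',)
```

Note that an UPDATE, not an INSERT, is what sets the new column on the existing row; the question's attempt to combine ALTER TABLE and INSERT in one statement is what the database rejects.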

postgres force json datatype

When working with the JSON datatype, is there a way to ensure the input JSON has certain elements? I don't mean a primary key; I want any JSON that gets inserted to have at least the id and name elements. It can have more, but at a minimum id and name must be there.
thanks
The function checks what you want:
create or replace function json_has_id_and_name(val json)
returns boolean language sql as $$
  select coalesce(
    (
      select array['id', 'name'] <@ array_agg(key)
      from json_object_keys(val) key
    ),
    false)
$$;
select json_has_id_and_name('{"id":1, "name":"abc"}'), json_has_id_and_name('{"id":1}');

 json_has_id_and_name | json_has_id_and_name
----------------------+----------------------
 t                    | f
(1 row)
You can use it in a check constraint, e.g.:
create table my_table (
    id int primary key,
    jdata json check (json_has_id_and_name(jdata))
);

insert into my_table values (1, '{"id":1}');

ERROR:  new row for relation "my_table" violates check constraint "my_table_jdata_check"
DETAIL:  Failing row contains (1, {"id":1}).
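The same required-keys check can be mirrored client-side with the standard json module, as a sketch of the constraint's logic (not a replacement for enforcing it in the database):

```python
import json

REQUIRED_KEYS = {"id", "name"}

def json_has_id_and_name(text: str) -> bool:
    """Mirror of the SQL check: both 'id' and 'name' must be top-level keys."""
    doc = json.loads(text)
    return isinstance(doc, dict) and REQUIRED_KEYS <= doc.keys()

print(json_has_id_and_name('{"id": 1, "name": "abc"}'))  # True
print(json_has_id_and_name('{"id": 1}'))                 # False
```

Validating in the application gives friendlier errors, but only the check constraint guarantees that no row violating the rule can ever be inserted.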

Dynamic Partitioning + CREATE AS on HIVE

I'm trying to create a new table from another table with CREATE AS and dynamic partitioning on the Hive CLI. I'm learning from the official Hive wiki, where there is this example:
CREATE TABLE T (key int, value string)
PARTITIONED BY (ds string, hr int) AS
SELECT key, value, ds, hr+1 hr1
FROM srcpart
WHERE ds is not null
AND hr > 10;
But I received this error:
FAILED: SemanticException [Error 10065]:
CREATE TABLE AS SELECT command cannot specify the list of columns for the target table
Source: https://cwiki.apache.org/confluence/display/Hive/DynamicPartitions#DynamicPartitions-Syntax
Since you already know the full schema of the target table, try creating it first and then populating it with an INSERT ... SELECT:
SET hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE T (key int, value string)
PARTITIONED BY (ds string, hr int);

INSERT OVERWRITE TABLE T PARTITION(ds, hr)
SELECT key, value, ds, hr+1 AS hr
FROM srcpart
WHERE ds is not null
AND hr > 10;
Note: the set command is needed since you are performing a full dynamic partition insert.
In the above code, instead of the CREATE statement you can use CREATE TABLE T LIKE srcpart; in case the partitioning is the same.

How to Insert primary key in a table while using Play Framework, Ebean, sql server 2008

I want to insert a primary key in a table. How do I set the "Identity Insert" property ON from my Controller Class? I've tried this with no luck:
String sql = "SET IDENTITY_INSERT t_Student ON";
Connection conn = play.db.DB.getConnection();
try {
    Statement stmt = conn.createStatement();
    try {
        stmt.execute(sql);
        student.save();
    } finally {
        stmt.close();
    }
} finally {
    conn.close();
}
Here, I want to save the "student" object in to the DataBase, but getting the error: "Cannot insert explicit value for identity column in table 't_Student' when IDENTITY_INSERT is set to OFF." Thanks in advance.
One important thing.
IDENTITY_INSERT has nothing to do with the primary key of a database table, and in your situation it has nothing to do with Play Framework either. A primary key, in short, is a constraint that won't allow duplicate values in a column (or columns). That's obvious. IDENTITY_INSERT means that you take explicit control of the values going into the identity column. You're the boss.
For example:
CREATE TABLE student (id int PRIMARY KEY, name varchar(40))
This situation simulates SET IDENTITY_INSERT ON and is very rare in a database schema. Since there is no identity column, you need to explicitly supply both id and name, and you must make sure the id doesn't collide with existing primary keys in the table:
INSERT INTO student VALUES(1,'John')
The first time this code will work, but if you run it again it will raise an exception because you are adding a duplicate PK value.
SET IDENTITY_INSERT OFF, on the other hand, lets you omit the id on each insert because the database engine generates it for you, while still enforcing the PK constraint.
CREATE TABLE student (id int IDENTITY PRIMARY KEY, name varchar(40))
And if you want to insert a row into the table you can do this:
INSERT INTO student VALUES('Johny Paul')
And if you force an explicit id:
INSERT INTO student VALUES(666,'Johnny Rambo')
You will get an exception: An explicit value for the identity column in table 'student' can only be specified when a column list is used and IDENTITY_INSERT is ON. There are some situations where you need full control of the values: dictionary tables shared across many environments, or when your PK is corrupted and you need to perform "surgery" on it. Otherwise don't use IDENTITY_INSERT ON. Perhaps under the hood Ebean can't recognize whether the insert statement should include the id or not.
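The PK behaviour described above can be sketched with the standard-library sqlite3 module (SQLite's INTEGER PRIMARY KEY auto-assigns ids like an identity column; unlike SQL Server it also accepts explicit ids without any switch, but duplicate PK values still fail):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")

# Omit the id: the engine generates it, like an IDENTITY column.
conn.execute("INSERT INTO student (name) VALUES ('Johny Paul')")

# An explicit, unused id works the first time...
conn.execute("INSERT INTO student VALUES (666, 'Johnny Rambo')")

# ...but inserting the same PK value again violates the constraint.
try:
    conn.execute("INSERT INTO student VALUES (666, 'Johnny Rambo')")
    duplicate_ok = True
except sqlite3.IntegrityError:
    duplicate_ok = False

print(duplicate_ok)  # False: the PK constraint rejects the duplicate
```

This is the same trade-off the answer describes: let the engine assign ids and you can never collide; take control yourself and the PK constraint is all that stands between you and duplicates.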
Answer
I strongly suggest using a table with IDENTITY (IDENTITY_INSERT OFF). I have used Ebean only a few times (I don't like this database layer framework), but I think this code should help:
@Entity
@Table(name="student")
public class Student {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    public Long id;
    public String name;
}
Student student = new Student();
student.name = "Jonny JDK";
Ebean.save(student);
Or you can write raw SQL:
SqlUpdate insert = Ebean.createSqlUpdate("INSERT INTO student VALUES ('Jonny Bravo')");
insert.execute();

How to update column in one table from another if both tables have a text field in common

I have two tables Token and distinctToken. Following is the description of the two tables.
Token
(id int, text varchar(100), utokenid int)
distinctToken
(id int, text varchar(100))
The text fields in both tables contain the same data, with one exception: the text field in the Token table contains repeated entries.
I want to update the Token table so that utokenid becomes a foreign key. To be more specific, I want to set Token.utokenid = distinctToken.id wherever Token.text is the same as distinctToken.text. Is this possible with a plain UPDATE, or should I write a stored procedure to do it?
UPDATE Token t, distinctToken dt SET t.utokenid = dt.id WHERE t.text = dt.text;
Am I missing something?
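Note that the multi-table UPDATE above is MySQL syntax. A portable equivalent uses a correlated subquery, sketched here with the standard-library sqlite3 module so it is runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Token (id INTEGER PRIMARY KEY, text VARCHAR(100), utokenid INT);
    CREATE TABLE distinctToken (id INTEGER PRIMARY KEY, text VARCHAR(100));
    INSERT INTO Token (text) VALUES ('foo'), ('bar'), ('foo');
    INSERT INTO distinctToken (text) VALUES ('foo'), ('bar');
""")

# Correlated-subquery form of:
#   UPDATE Token t, distinctToken dt SET t.utokenid = dt.id WHERE t.text = dt.text
conn.execute("""
    UPDATE Token
    SET utokenid = (SELECT dt.id FROM distinctToken dt WHERE dt.text = Token.text)
""")

rows = conn.execute("SELECT text, utokenid FROM Token ORDER BY id").fetchall()
print(rows)  # [('foo', 1), ('bar', 2), ('foo', 1)]
```

The repeated 'foo' rows both pick up the same distinctToken.id, which is exactly the de-duplicating mapping the question asks for; no stored procedure is needed.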