SQLAlchemy is doing extra select when I'm trying to just remove a row
TL;DR
I'm getting a timeout error on production:
OperationalError: (QueryCanceledError) canceling statement due to statement timeout CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."tableY" x WHERE $1 OPERATOR(pg_catalog.=) "tableX_id" FOR KEY SHARE OF x" 'DELETE FROM tableX WHERE tableX.id = %(id)s' {'id': 42}
and the main cause is
SELECT 1 FROM ONLY "public"."tableY" x
because there's no index on the tableX_id field. I'm trying to figure out where this query comes from; I don't need this check.
Full explanation
I have two tables, tableX and tableY, with the relationship defined on TableY in SQLAlchemy as:
class TableY(Base):
    ...
    tableX = relationship(
        'TableX',
        backref=backref(
            'rows_y',
            uselist=True,
            lazy='dynamic',
        ),
        uselist=False,
    )
and in SQL as
create table if not exists tableY
(
    ...
    tableX_id integer not null
        constraint fk_tableX_id_tableY
            references tableX
            on update cascade on delete restrict
);
I'm trying to remove a row from tableX:
tableX_obj.delete()
SQLAlchemy also tries to handle the related rows (linked through the foreign key), so BEFORE issuing the DELETE query it executes
SELECT id FROM tableY where tableX_id=42
but tableY is a many-to-many association table, so it doesn't have an index on the tableX_id field, which leads to a TIMEOUT.
Creating an index is not a good solution, because it would be useless: I'm sure that when I do the DELETE there will be no related records, so I would end up with a pretty large index that does NOT contain any relevant information, just garbage.
So I wanted the DB to handle this situation and added passive_deletes=True:
tableX = relationship(
    'TableX',
    backref=backref(
        'rows_y',
        uselist=True,
        lazy='dynamic',
        passive_deletes=True,
    ),
    uselist=False,
)
It seemed to solve the issue, BUT now I'm getting a new timeout error on production:
OperationalError: (QueryCanceledError) canceling statement due to statement timeout CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."tableY" x WHERE $1 OPERATOR(pg_catalog.=) "tableX_id" FOR KEY SHARE OF x" 'DELETE FROM tableX WHERE tableX.id = %(id)s' {'id': 42}
What could be executing this query? Does it come from SQLAlchemy? If so, how can I disable it?
PostgreSQL 9.4
SQLAlchemy 0.9.8 (yes, I know :( )
1 Answer
The "extra" SELECT is done by Postgresql itself in order to enforce your foreign key constraint and its on delete restriction. Postgresql is checking if any row in table Y is referencing the to be deleted row in table X. You can easily reproduce this condition with some test tables and setting a ridiculously low statement timeout:
begin;
create table foo (
    id serial primary key
);
create table bar (
    foo_id int references foo (id) on delete restrict
);
insert into foo default values;
insert into foo default values;
insert into bar select 2 from generate_series(1, 1000001);
-- timeout of 5 ms
set statement_timeout = 5;
-- try to delete a row that is not referenced in bar; without an index
-- on bar.foo_id the foreign key check has to scan the whole table
delete from foo where id = 1;
rollback;
and the result:
BEGIN
CREATE TABLE
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1000001
SET
psql:test.sql:18: ERROR: canceling statement due to statement timeout
CONTEXT: SQL statement "SELECT 1 FROM ONLY "public"."bar" x WHERE $1 OPERATOR(pg_catalog.=) "foo_id" FOR KEY SHARE OF x"
ROLLBACK
There are ways to disable foreign key checks, but you must know what you're doing lest you break your referential integrity. Another option is to consider whether you need the on delete restrict action at all, or to just create the index; you mention that tableY is an association table, so perhaps the column referencing tableX's id should be part of its primary key. Even though you're sure that there are no rows left in tableY referencing tableX when deleting, the database cannot know that without checking.
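For illustration, here is a minimal sketch of those two index-based options, using the anonymized names from the question (other_id is a hypothetical second column of the association table, standing in for its other foreign key):
-- Option 1: a plain index on the referencing column, so the
-- FOR KEY SHARE check can use an index scan instead of a sequential scan
create index ix_tableY_tableX_id on tableY (tableX_id);

-- Option 2: for an association table, make the referencing column the
-- leading column of a composite primary key; the primary key's underlying
-- index then serves the foreign key check as well
-- (other_id is a hypothetical second column of the association table)
alter table tableY add primary key (tableX_id, other_id);
Either way, the foreign key check becomes an index lookup and the DELETE no longer has to scan the whole association table.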