API Reference
Models
class Model(**kwargs)
Models provide a 1-to-1 mapping to database tables. Subclasses of Model declare any number of Field instances as class attributes. These fields correspond to columns on the table.
Table-level operations, such as select(), update(), insert(), and delete(), are implemented as classmethods. Row-level operations such as save() and delete_instance() are implemented as instance methods.
Parameters: kwargs – Initialize the model, assigning the given key/values to the appropriate fields.
Example:
class User(Model):
username = CharField()
join_date = DateTimeField(default=datetime.datetime.now)
is_admin = BooleanField()
u = User(username='charlie', is_admin=True)
classmethod select(*selection)
Parameters: selection – A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected.
Return type: a SelectQuery for the given Model.
Examples of selecting all columns (default):
User.select().where(User.active == True).order_by(User.username)
Example of selecting all columns on Tweet and the parent model, User. When the user foreign key is accessed on a Tweet instance, no additional query will be needed (see N+1 for more details):
(Tweet
.select(Tweet, User)
.join(User)
.order_by(Tweet.created_date.desc()))
classmethod update(**update)
Parameters: update – mapping of field-name to expression
Return type: an UpdateQuery for the given Model
Example showing users being marked inactive if their registration expired:
q = User.update(active=False).where(User.registration_expired == True)
q.execute() # Execute the query, updating the database.
Example showing an atomic update:
q = PageView.update(count=PageView.count + 1).where(PageView.url == url)
q.execute() # execute the query, updating the database.
Note
When an update query is executed, the number of rows modified will be returned.
classmethod insert(**insert)
Insert a new row into the database. If any fields on the model have default values, these values will be used if the fields are not explicitly set in the insert dictionary.
Parameters: insert – mapping of field or field-name to expression.
Return type: an InsertQuery for the given Model.
Example showing creation of a new user:
q = User.insert(username='admin', active=True, registration_expired=False)
q.execute() # perform the insert.
You can also use Field objects as the keys:
User.insert(**{User.username: 'admin'}).execute()
If you have a model with a default value on one of the fields, and that field is not specified in the insert parameter, the default will be used:
class User(Model):
username = CharField()
active = BooleanField(default=True)
# This INSERT query will automatically specify `active=True`:
User.insert(username='charlie')
Note
When an insert query is executed on a table with an auto-incrementing primary key, the primary key of the new row will be returned.
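A rough sketch of that behavior (assuming User has an auto-incrementing id primary key):
# execute() returns the primary key assigned to the new row.
new_id = User.insert(username='mickey').execute()
mickey = User.get(User.id == new_id)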
insert_many(rows)
Insert multiple rows at once. The rows parameter must be an iterable that yields dictionaries. As with insert(), fields that are not specified in the dictionary will use their default value, if one exists.
Note
Due to the nature of bulk inserts, each row must contain the same fields. The following will not work:
Person.insert_many([
{'first_name': 'Peewee', 'last_name': 'Herman'},
{'first_name': 'Huey'}, # Missing "last_name"!
])
Parameters: rows – An iterable containing dictionaries of field-name-to-value.
Return type: an InsertQuery for the given Model.
Example of inserting multiple Users:
usernames = ['charlie', 'huey', 'peewee', 'mickey']
row_dicts = ({'username': username} for username in usernames)
# Insert 4 new rows.
User.insert_many(row_dicts).execute()
Because the rows parameter can be an arbitrary iterable, you can also use a generator:
def get_usernames():
for username in ['charlie', 'huey', 'peewee']:
yield {'username': username}
User.insert_many(get_usernames()).execute()
Warning
If you are using SQLite, your SQLite library must be version 3.7.11 or newer to take advantage of bulk inserts.
Note
SQLite has a default limit of 999 bound variables per statement. This limit can be modified at compile-time or at run-time, but if modifying at run-time, you can only specify a lower value than the default limit.
For more information, consult the SQLite documentation on limits (in particular SQLITE_MAX_VARIABLE_NUMBER).
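If you need to insert more rows than the limit allows, one workaround is to break the rows into chunks so that each statement stays under the variable limit. A minimal sketch, assuming db is your Database instance (the chunk size of 100 is an arbitrary choice, not a peewee default):
usernames = ['charlie', 'huey', 'peewee', 'mickey']
rows = [{'username': username} for username in usernames]

def chunked(items, chunk_size=100):
    # Yield successive slices of the list of row dictionaries.
    for i in range(0, len(items), chunk_size):
        yield items[i:i + chunk_size]

with db.atomic():
    for chunk in chunked(rows):
        User.insert_many(chunk).execute()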
classmethod insert_from(fields, query)
Insert rows into the table using a query as the data source. This API should be used for INSERT INTO ... SELECT FROM queries.
Parameters: - fields – The field objects to map the selected data into.
- query – The source of the new rows.
Return type: an InsertQuery for the given Model.
Example of inserting data across tables for denormalization purposes:
source = (User
.select(User.username, fn.COUNT(Tweet.id))
.join(Tweet, JOIN.LEFT_OUTER)
.group_by(User.username))
UserTweetDenorm.insert_from(
[UserTweetDenorm.username, UserTweetDenorm.num_tweets],
source).execute()
classmethod delete()
Return type: a DeleteQuery for the given Model.
Example showing the deletion of all inactive users:
q = User.delete().where(User.active == False)
q.execute() # remove the rows
Warning
This method performs a delete on the entire table. To delete a single instance, see Model.delete_instance().
classmethod raw(sql, *params)
Parameters: - sql – a string SQL expression
- params – any number of parameters to interpolate
Return type: a RawQuery for the given Model
Example selecting rows from the User table:
q = User.raw('select id, username from users')
for user in q:
print user.id, user.username
Note
Generally the use of raw is reserved for those cases where you can significantly optimize a select query. It is useful for select queries since it will return instances of the model.
classmethod create(**attributes)
Parameters: attributes – key/value pairs of model attributes
Return type: a model instance with the provided attributes
Example showing the creation of a user (a row will be added to the database):
user = User.create(username='admin', password='test')
Note
The create() method is a shorthand for instantiate-then-save.
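A minimal sketch of that equivalence:
# The following two snippets do the same thing:
user = User.create(username='admin', password='test')

user = User(username='admin', password='test')
user.save()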
classmethod get(*args)
Parameters: args – a list of query expressions, e.g. User.username == 'foo'
Return type: Model instance or raises DoesNotExist exception
Get a single row from the database that matches the given query. Raises a <model-class>.DoesNotExist if no rows are returned:
user = User.get(User.username == username, User.active == True)
This method is also exposed via the SelectQuery, though it takes no parameters:
active = User.select().where(User.active == True)
try:
user = active.where(
(User.username == username) &
(User.active == True)
).get()
except User.DoesNotExist:
user = None
Note
The get() method is shorthand for selecting with a limit of 1. It has the added behavior of raising an exception when no matching row is found. If more than one row is found, the first row returned by the database cursor will be used.
classmethod get_or_create([defaults=None[, **kwargs]])
Parameters: - defaults (dict) – A dictionary of values to set on newly-created model instances.
- kwargs – Django-style filters specifying which model to get, and what values to apply to new instances.
Returns: A 2-tuple containing the model instance and a boolean indicating whether the instance was created.
This function attempts to retrieve a model instance based on the provided filters. If no matching model can be found, a new model is created using the parameters specified by the filters and any values in the defaults dictionary.
Note
Use care when calling get_or_create with autocommit=False, as the get_or_create() method will call Database.atomic() to create either a transaction or savepoint.
Example without get_or_create:
# Without `get_or_create`, we might write:
try:
person = Person.get(
(Person.first_name == 'John') &
(Person.last_name == 'Lennon'))
except Person.DoesNotExist:
person = Person.create(
first_name='John',
last_name='Lennon',
birthday=datetime.date(1940, 10, 9))
Equivalent code using get_or_create:
person, created = Person.get_or_create(
first_name='John',
last_name='Lennon',
defaults={'birthday': datetime.date(1940, 10, 9)})
classmethod alias()
Return type: ModelAlias instance
The alias() method is used to create self-joins.
Example:
Parent = Category.alias()
sq = (Category
.select(Category, Parent)
.join(Parent, on=(Category.parent == Parent.id))
.where(Parent.name == 'parent category'))
Note
When using a ModelAlias in a join, you must explicitly specify the join condition.
classmethod create_table([fail_silently=False])
Parameters: fail_silently (bool) – If set to True, the method will check for the existence of the table before attempting to create it.
Create the table for the given model, along with any constraints and indexes.
Example:
database.connect()
SomeModel.create_table() # Execute the create table query.
classmethod drop_table([fail_silently=False[, cascade=False]])
Parameters: - fail_silently (bool) – If set to True, the query will check for the existence of the table before attempting to remove it.
- cascade (bool) – Drop table with CASCADE option.
Drop the table for the given model.
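A short usage sketch (SomeModel is a placeholder model, as in the create_table() example above):
# Drop the table if it exists, cascading to dependent objects where the
# backend supports it.
SomeModel.drop_table(fail_silently=True, cascade=True)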
classmethod table_exists()
Return type: Boolean indicating whether the table for this model exists in the database
classmethod sqlall()
Returns: A list of queries required to create the table and indexes.
save([force_insert=False[, only=None]])
Parameters: - force_insert (bool) – Whether to force execution of an insert
- only (list) – A list of fields to persist – when supplied, only the given fields will be persisted.
Save the given instance, creating or updating depending on whether it has a primary key. If force_insert=True, an INSERT will be issued regardless of whether or not the primary key exists.
Example showing saving a model instance:
user = User()
user.username = 'some-user' # does not touch the database
user.save() # change is persisted to the db
delete_instance([recursive=False[, delete_nullable=False]])
Parameters: - recursive – Delete this instance and anything that depends on it, optionally updating those that have nullable dependencies
- delete_nullable – If doing a recursive delete, delete all dependent objects regardless of whether it could be updated to NULL
Delete the given instance. Any foreign keys set to cascade on delete will be deleted automatically. For more programmatic control, you can call with recursive=True, which will delete any non-nullable related models (those that are nullable will be set to NULL). If you wish to delete all dependencies regardless of whether they are nullable, set delete_nullable=True.
Example:
some_obj.delete_instance() # it is gone forever
dependencies([search_nullable=False])
Parameters: search_nullable (bool) – Search models related via a nullable foreign key
Return type: Generator expression yielding queries and foreign key fields
Generate a list of queries of dependent models. Yields a 2-tuple containing the query and corresponding foreign key field. Useful for searching dependencies of a model, i.e. things that would be orphaned in the event of a delete.
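A rough sketch of walking an instance's dependencies (here just listing the dependent models and the foreign keys involved):
user = User.get(User.username == 'charlie')
for query, fk in user.dependencies():
    # fk.model_class is the dependent model; `query` restricts it to rows
    # referencing this particular user.
    print fk.model_class.__name__, fk.name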
dirty_fields
Return a list of fields that were manually set.
Return type: list
Note
If you just want to persist modified fields, you can call model.save(only=model.dirty_fields).
If you always want to only save a model’s dirty fields, you can use the Meta option only_save_dirty = True. Then, any time you call Model.save(), by default only the dirty fields will be saved, e.g.
class Person(Model):
first_name = CharField()
last_name = CharField()
dob = DateField()
class Meta:
database = db
only_save_dirty = True
is_dirty()
Return whether any fields were manually set.
Return type: bool
prepared()
This method provides a hook for performing model initialization after the row data has been populated.
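A minimal sketch of overriding the hook (the derived display_name attribute is purely illustrative):
class User(Model):
    username = CharField()

    def prepared(self):
        super(User, self).prepared()
        # Set a non-database attribute once the row data is available.
        self.display_name = self.username.title()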
Fields
class Field(null=False, index=False, unique=False, verbose_name=None, help_text=None, db_column=None, default=None, choices=None, primary_key=False, sequence=None, constraints=None, schema=None, **kwargs)
The base class from which all other field types extend.
db_field = '<some field type>'
Attribute used to map this field to a column type, e.g. “string” or “datetime”
_is_bound
Boolean flag indicating if the field is attached to a model class.
model_class
The model the field belongs to. Only applies to bound fields.
name
The name of the field. Only applies to bound fields.
db_value(value)
Parameters: value – python data type to prep for storage in the database
Return type: converted python datatype
python_value(value)
Parameters: value – data coming from the backend storage
Return type: python data type
coerce(value)
This method is a shorthand that is used, by default, by both db_value and python_value. You can usually get away with just implementing this.
Parameters: value – arbitrary data from app or backend
Return type: python data type
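A rough sketch of a custom field built on these hooks (the CommaSeparatedField name and its behavior are illustrative, not part of peewee):
class CommaSeparatedField(TextField):
    # Store a list of strings as comma-separated text.
    def db_value(self, value):
        if value is not None:
            return ','.join(value)

    def python_value(self, value):
        if value is not None:
            return value.split(',')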
class IntegerField
Stores: integers
db_field = 'int'
class BigIntegerField
Stores: big integers
db_field = 'bigint'
class PrimaryKeyField
Stores: auto-incrementing integer fields suitable for use as primary key.
db_field = 'primary_key'
class FloatField
Stores: floating-point numbers
db_field = 'float'
class DoubleField
Stores: double-precision floating-point numbers
db_field = 'double'
class DecimalField
Stores: decimal numbers, using python standard library Decimal objects
Additional attributes and values:
max_digits | 10 |
decimal_places | 5 |
auto_round | False |
rounding | decimal.DefaultContext.rounding |
db_field = 'decimal'
class CharField
Stores: small strings (0-255 bytes)
Additional attributes and values:
max_length | 255 |
db_field = 'string'
class TextField
Stores: arbitrarily large strings
db_field = 'text'
class DateTimeField
Stores: python datetime.datetime instances
Accepts a special parameter formats, which contains a list of formats the datetime can be encoded with. The default behavior is:
'%Y-%m-%d %H:%M:%S.%f' # year-month-day hour-minute-second.microsecond
'%Y-%m-%d %H:%M:%S' # year-month-day hour-minute-second
'%Y-%m-%d' # year-month-day
Note
If the incoming value does not match a format, it will be returned as-is
db_field = 'datetime'
year
An expression suitable for extracting the year, for example to retrieve all blog posts from 2013:
Blog.select().where(Blog.pub_date.year == 2013)
month
An expression suitable for extracting the month from a stored date.
day
An expression suitable for extracting the day from a stored date.
hour
An expression suitable for extracting the hour from a stored time.
minute
An expression suitable for extracting the minute from a stored time.
second
An expression suitable for extracting the second from a stored time.
class DateField
Stores: python datetime.date instances
Accepts a special parameter formats, which contains a list of formats the date can be encoded with. The default behavior is:
'%Y-%m-%d' # year-month-day
'%Y-%m-%d %H:%M:%S' # year-month-day hour-minute-second
'%Y-%m-%d %H:%M:%S.%f' # year-month-day hour-minute-second.microsecond
Note
If the incoming value does not match a format, it will be returned as-is
db_field = 'date'
year
An expression suitable for extracting the year, for example to retrieve all people born in 1983:
Person.select().where(Person.dob.year == 1983)
month
Same as year, except extract month.
day
Same as year, except extract day.
class TimeField
Stores: python datetime.time instances
Accepts a special parameter formats, which contains a list of formats the time can be encoded with. The default behavior is:
'%H:%M:%S.%f' # hour:minute:second.microsecond
'%H:%M:%S' # hour:minute:second
'%H:%M' # hour:minute
'%Y-%m-%d %H:%M:%S.%f' # year-month-day hour-minute-second.microsecond
'%Y-%m-%d %H:%M:%S' # year-month-day hour-minute-second
Note
If the incoming value does not match a format, it will be returned as-is
db_field = 'time'
hour
Extract the hour from a time, for example to retrieve all events occurring in the evening:
Event.select().where(Event.time.hour > 17)
minute
Same as hour, except extract minute.
second
Same as hour, except extract second.
class TimestampField
Stores: python datetime.datetime instances (stored as integers)
Accepts a special parameter resolution, which is a power of 10 up to 10^6. This allows sub-second precision while still using an IntegerField for storage. Default is 1 (second precision).
Also accepts a boolean parameter utc, used to indicate whether the timestamps should be UTC. Default is False.
Finally, the field default is the current timestamp. If you do not want this behavior, then explicitly pass in default=None.
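A short sketch of declaring the field with the parameters described above (the Event model is illustrative):
class Event(Model):
    # Store timestamps as integers with millisecond resolution, in UTC.
    created = TimestampField(resolution=1000, utc=True)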
class BooleanField
Stores: True / False
db_field = 'bool'
class BlobField
Store arbitrary binary data.
class UUIDField
Store UUID values.
Note
Currently this field is only supported by PostgresqlDatabase.
class BareField
Intended to be used only with SQLite. Since data-types are not enforced, you can declare fields without any data-type. It is also common for SQLite virtual tables to use meta-columns or untyped columns, so for those cases as well you may wish to use an untyped field.
Accepts a special coerce parameter, a function that takes a value coming from the database and converts it into the appropriate Python type.
Note
Currently this field is only supported by SqliteDatabase.
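A minimal sketch (the model and the int coercion are illustrative; it assumes the model's database is a SqliteDatabase):
class KeyValue(Model):
    # Untyped column; values read from the database are passed through int().
    value = BareField(coerce=int)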
class ForeignKeyField(rel_model[, related_name=None[, on_delete=None[, on_update=None[, to_field=None[, …]]]]])
Stores: relationship to another model
class User(Model):
name = CharField()
class Tweet(Model):
user = ForeignKeyField(User, related_name='tweets')
content = TextField()
# "user" attribute
>>> some_tweet.user
<User: charlie>
# "tweets" related name attribute
>>> for tweet in charlie.tweets:
... print tweet.content
Some tweet
Another tweet
Yet another tweet
Note
Foreign keys do not have a particular db_field as they will take their field type depending on the type of primary key on the model they are related to.
Note
If you manually specify a to_field, that field must be either a primary key or have a unique constraint.
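A rough sketch of pointing a foreign key at a non-primary-key column (the Account/Invoice models are illustrative; note the unique constraint on the referenced field):
class Account(Model):
    email = CharField(unique=True)

class Invoice(Model):
    # Reference Account.email rather than Account's primary key.
    account = ForeignKeyField(Account, to_field=Account.email)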
class CompositeKey(*fields)
Specify a composite primary key for a model. Unlike the other fields, a composite key is defined in the model’s Meta class after the fields have been defined. It takes as parameters the string names of the fields to use as the primary key:
class BlogTagThrough(Model):
blog = ForeignKeyField(Blog, related_name='tags')
tag = ForeignKeyField(Tag, related_name='blogs')
class Meta:
primary_key = CompositeKey('blog', 'tag')
Query Types
class Query
The parent class from which all other query classes are derived. While you will not deal with Query directly in your code, it implements some methods that are common across all query types.
where(*expressions)
Parameters: expressions – a list of one or more expressions
Return type: a Query instance
Example selecting users where the username is equal to 'somebody':
sq = SelectQuery(User).where(User.username == 'somebody')
Example selecting tweets made by users who are either editors or administrators:
sq = SelectQuery(Tweet).join(User).where(
(User.is_editor == True) |
(User.is_admin == True))
Example of deleting tweets by users who are no longer active:
dq = DeleteQuery(Tweet).where(
Tweet.user << User.select().where(User.active == False))
dq.execute() # perform the delete query
Note
where() calls are chainable. Multiple calls will be “AND”-ed together.
join(model, join_type=None, on=None)
Parameters: - model – the model to join on. There must be a ForeignKeyField between the current query context and the model passed in.
- join_type – allows the type of JOIN used to be specified explicitly, one of JOIN.INNER, JOIN.LEFT_OUTER, JOIN.FULL, JOIN.RIGHT_OUTER, or JOIN.CROSS.
- on – if multiple foreign keys exist between two models, this parameter is the ForeignKeyField to join on.
Return type: a Query instance
Generate a JOIN clause from the current query context to the model passed in, and establish model as the new query context.
Example selecting tweets and joining on user in order to restrict to only those tweets made by “admin” users:
sq = SelectQuery(Tweet).join(User).where(User.is_admin == True)
Example selecting users and joining on a particular foreign key field. See the example app for a real-life usage:
sq = SelectQuery(User).join(Relationship, on=Relationship.to_user)
switch(model)
Parameters: model – model to switch the query context to.
Return type: a clone of the query with a new query context
Switches the query context to the given model. Raises an exception if the model has not been selected or joined on previously. Useful for performing multiple joins from a single table.
The following example selects from blog and joins on both entry and user:
sq = SelectQuery(Blog).join(Entry).switch(Blog).join(User)
alias(alias=None)
Parameters: alias (str) – A string to alias the result of this query
Return type: a Query instance
Assign an alias to the given query, which can be used as part of a subquery.
sql()
Return type: a 2-tuple containing the appropriate SQL query and a tuple of parameters
execute()
Execute the given query
scalar([as_tuple=False[, convert=False]])
Parameters: - as_tuple (bool) – return the row as a tuple or a single value
- convert (bool) – attempt to coerce the selected value to the appropriate data-type based on its associated Field type (assuming one exists).
Return type: the resulting row, either as a single value or tuple
Provide a way to retrieve single values from select queries, for instance when performing an aggregation.
>>> PageView.select(fn.Count(fn.Distinct(PageView.url))).scalar()
100 # <-- there are 100 distinct URLs in the pageview table
This example illustrates the use of the convert argument. When using a SQLite database, datetimes are stored as strings. To select the max datetime, and have it returned as a datetime, we will specify convert=True.
>>> PageView.select(fn.MAX(PageView.timestamp)).scalar()
'2016-04-20 13:37:00.1234'
>>> PageView.select(fn.MAX(PageView.timestamp)).scalar(convert=True)
datetime.datetime(2016, 4, 20, 13, 37, 0, 1234)
class SelectQuery(model_class, *selection)
By far the most complex of the query classes available in peewee. It supports all clauses commonly associated with select queries.
Methods on the select query can be chained together.
SelectQuery implements an __iter__() method, allowing it to be iterated to return model instances.
If no selection is provided, it will default to all the fields of the given model.
Example selecting some user instances from the database. Only the id and username columns are selected. When iterated, will return instances of the User model:
sq = SelectQuery(User, User.id, User.username)
for user in sq:
print user.username
Example selecting users and additionally the number of tweets made by the user. The User instances returned will have an additional attribute, ‘count’, that corresponds to the number of tweets made:
sq = (SelectQuery(
User, User, fn.Count(Tweet.id).alias('count'))
.join(Tweet)
.group_by(User))
select(*selection)
Parameters: selection – a list of expressions, which can be model classes or fields. If left blank, will default to all the fields of the given model.
Return type: SelectQuery
Note
Usually the selection will be specified when the instance is created. This method simply exists for the case when you want to modify the SELECT clause independent of instantiating a query.
query = User.select()
query = query.select(User.username)
from_(*args)
Parameters: args – one or more expressions, for example Model or SelectQuery instance(s). If left blank, will default to the table of the given model.
Return type: SelectQuery
# rather than a join, select from both tables and join with where.
query = User.select().from_(User, Blog).where(Blog.user == User.id)
group_by(*clauses)
Parameters: clauses – a list of expressions, which can be model classes or individual field instances
Return type: SelectQuery
Group by one or more columns. If a model class is provided, all the fields on that model class will be used.
Example selecting users, joining on tweets, and grouping by the user so a count of tweets can be calculated for each user:
sq = (User
.select(User, fn.Count(Tweet.id).alias('count'))
.join(Tweet)
.group_by(User))
having(*expressions)
Parameters: expressions – a list of one or more expressions
Return type: SelectQuery
Here is the above example selecting users and tweet counts, but restricting the results to those users who have created 100 or more tweets:
sq = (User
.select(User, fn.Count(Tweet.id).alias('count'))
.join(Tweet)
.group_by(User)
.having(fn.Count(Tweet.id) > 100))
order_by(*clauses[, extend=False])
Parameters: - clauses – a list of fields, calls to field.asc() or field.desc(), or one or more expressions. If called without any arguments, any pre-existing ORDER BY clause will be removed.
- extend – When called with extend=True, Peewee will append the given clauses to the pre-existing ORDER BY rather than overwriting it.
Return type: SelectQuery
Example of ordering users by username:
User.select().order_by(User.username)
Example of selecting tweets and ordering them first by user, then newest first:
query = (Tweet
.select()
.join(User)
.order_by(
User.username,
Tweet.created_date.desc()))
You can also use + and - prefixes to indicate ascending or descending order if you prefer:
query = (Tweet
.select()
.join(User)
.order_by(
+User.username,
-Tweet.created_date))
A more complex example ordering users by the number of tweets made (greatest to least), then ordered by username in the event of a tie:
tweet_ct = fn.Count(Tweet.id)
sq = (User
.select(User, tweet_ct.alias('count'))
.join(Tweet)
.group_by(User)
.order_by(tweet_ct.desc(), User.username))
Example of removing a pre-existing ORDER BY clause:
# Query will be ordered by username.
users = User.select().order_by(User.username)
# Query will be returned in whatever order database chooses.
unordered_users = users.order_by()
window(*windows)
Parameters: windows (Window) – One or more Window instances.
Add one or more window definitions to this query.
window = Window(partition_by=[fn.date_trunc('day', PageView.timestamp)])
query = (PageView
.select(
PageView.url,
PageView.timestamp,
fn.Count(PageView.id).over(window=window))
.window(window)
.order_by(PageView.timestamp))
limit(num)
Parameters: num (int) – limit results to num rows
offset(num)
Parameters: num (int) – offset results by num rows
paginate(page_num, paginate_by=20)
Parameters: - page_num – a 1-based page number to use for paginating results
- paginate_by – number of results to return per-page
Return type: SelectQuery
Shorthand for applying a LIMIT and OFFSET to the query.
Page indices are 1-based, so page 1 is the first page.
User.select().order_by(User.username).paginate(3, 20) # get users 41-60
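For comparison, a sketch of the same page expressed with limit() and offset() directly:
# Equivalent to .paginate(3, 20): skip the first 40 rows, return the next 20.
User.select().order_by(User.username).limit(20).offset(40)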
distinct([is_distinct=True])
Parameters: is_distinct – See notes.
Return type: SelectQuery
Indicates that this query should only return distinct rows. Results in a SELECT DISTINCT query.
Note
The value for is_distinct should either be a boolean, in which case the query will (or won’t) be DISTINCT, or a list of one or more expressions, in which case a DISTINCT ON query is generated, e.g. .distinct([Model.col1, Model.col2]).
for_update([for_update=True[, nowait=False]])
Return type: SelectQuery
Indicate that this query should lock rows for update. If nowait is True, then the database will raise an OperationalError if it cannot obtain the lock.
with_lock([lock_type='UPDATE'])
Return type: SelectQuery
Indicates that this query should lock rows. A more generic version of the for_update() method.
Example:
# SELECT * FROM some_model FOR KEY SHARE NOWAIT;
SomeModel.select().with_lock('KEY SHARE NOWAIT')
Note
You do not need to include the word FOR.
naive()
Return type: SelectQuery
Flag this query indicating it should only attempt to reconstruct a single model instance for every row returned by the cursor. If multiple tables were queried, the columns returned are patched directly onto the single model instance.
Generally this method is useful for speeding up the time needed to construct model instances given a database cursor.
Note
this can provide a significant speed improvement when doing simple iteration over a large result set.
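A rough sketch (the selected columns are illustrative): with naive(), the joined username is patched directly onto each Tweet instance rather than onto a related User object.
query = (Tweet
         .select(Tweet, User.username)
         .join(User)
         .naive())
for tweet in query:
    print tweet.username, tweet.message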
iterator()
Return type: iterable
By default peewee will cache rows returned by the cursor. This is to prevent things like multiple iterations, slicing and indexing from triggering extra queries. When you are iterating over a large number of rows, however, this cache can take up a lot of memory. Using iterator() will save memory by not storing all the returned model instances.
# iterate over a large number of rows.
for obj in Stats.select().iterator():
# do something.
pass
tuples()
Return type: SelectQuery
Flag this query indicating it should simply return raw tuples from the cursor. This method is useful when you either do not want or do not need full model instances.
dicts()
Return type: SelectQuery
Flag this query indicating it should simply return dictionaries from the cursor. This method is useful when you either do not want or do not need full model instances.
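A brief sketch of iterating with dicts():
for row in User.select(User.id, User.username).dicts():
    # Each row is a plain dictionary, e.g. {'id': 1, 'username': 'charlie'}.
    print row['username']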
aggregate_rows()
Return type: SelectQuery
This method provides one way to avoid the N+1 query problem.
Consider a webpage where you wish to display a list of users and all of their associated tweets. You could approach this problem by listing the users, then for each user executing a separate query to retrieve their tweets. This is the N+1 behavior, because the number of queries varies depending on the number of users. Conventional wisdom is that it is preferable to execute fewer queries. Peewee provides several ways to avoid this problem.
You can use the prefetch() helper, which uses IN clauses to retrieve the tweets for the listed users.
Another method is to select both the user and the tweet data in a single query, then de-dupe the users, aggregating the tweets in the process.
The raw column data might appear like this:
# user.id, user.username, tweet.id, tweet.user_id, tweet.message
[1, 'charlie', 1, 1, 'hello'],
[1, 'charlie', 2, 1, 'goodbye'],
[2, 'no-tweets', NULL, NULL, NULL],
[3, 'huey', 3, 3, 'meow'],
[3, 'huey', 4, 3, 'purr'],
[3, 'huey', 5, 3, 'hiss'],
We can infer from the JOIN clause that the user data will be duplicated, and therefore by de-duping the users, we can collect their tweets in one go and iterate over the users and tweets transparently.
query = (User
.select(User, Tweet)
.join(Tweet, JOIN.LEFT_OUTER)
.order_by(User.username, Tweet.id)
.aggregate_rows()) # .aggregate_rows() tells peewee to de-dupe the rows.
for user in query:
print user.username
for tweet in user.tweets:
print ' ', tweet.message
# Producing the following output:
charlie
hello
goodbye
huey
meow
purr
hiss
no-tweets
Warning
Be sure that you specify an ORDER BY clause that ensures duplicated data will appear in consecutive rows.
Note
You can specify arbitrarily complex joins, though for more complex queries it may be more efficient to use prefetch(). In short, try both and see what works best for your data-set.
Note
For more information, see the Avoiding N+1 queries document and the Using aggregate_rows sub-section.
annotate(related_model, aggregation=None)
Parameters: - related_model – related Model on which to perform aggregation, must be linked by ForeignKeyField.
- aggregation – the type of aggregation to use, e.g. fn.Count(Tweet.id).alias('count')
Return type: SelectQuery
Annotate a query with an aggregation performed on a related model, for example, “get a list of users with the number of tweets for each”:
>>> User.select().annotate(Tweet)
If aggregation is None, it will default to fn.Count(related_model.id).alias('count') but can be anything:
>>> user_latest = User.select().annotate(Tweet, fn.Max(Tweet.created_date).alias('latest'))
Note
If the ForeignKeyField is nullable, then a LEFT OUTER join may need to be used:
query = (User
.select()
.join(Tweet, JOIN.LEFT_OUTER)
.switch(User) # Switch query context back to `User`.
.annotate(Tweet))
aggregate(aggregation)
Parameters: aggregation – a function specifying what aggregation to perform, for example fn.Max(Tweet.created_date).
Method to look at an aggregate of rows using a given function and return a scalar value, such as the count of all rows or the average value of a particular column.
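A short sketch of pulling back a single aggregated value:
# Most recent tweet creation date, returned as a scalar.
latest = Tweet.select().aggregate(fn.Max(Tweet.created_date))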
count([clear_limit=False])
Parameters: clear_limit (bool) – Remove any limit or offset clauses from the query before counting.
Return type: an integer representing the number of rows in the current query
Note
If the query has a GROUP BY, DISTINCT, LIMIT, or OFFSET clause, then the wrapped_count() method will be used instead.
>>> sq = SelectQuery(Tweet)
>>> sq.count()
45 # number of tweets
>>> deleted_tweets = sq.where(Tweet.status == DELETED)
>>> deleted_tweets.count()
3 # number of tweets that are marked as deleted
wrapped_count([clear_limit=False])
Parameters: clear_limit (bool) – Remove any limit or offset clauses from the query before counting.
Return type: an integer representing the number of rows in the current query
Wrap the count query in a subquery. Additional overhead, but will give correct counts when performing DISTINCT queries or those with GROUP BY clauses.
Note
count() will automatically default to wrapped_count() in the event the query is distinct or has a grouping.
exists()
Return type: boolean indicating whether the current query will return any rows. Uses an optimized lookup, so use this rather than get().
sq = User.select().where(User.active == True)
if sq.where(User.username == username, User.active == True).exists():
authenticated = True
get()
Return type: Model instance or raises DoesNotExist exception
Get a single row from the database that matches the given query. Raises a <model-class>.DoesNotExist if no rows are returned:
active = User.select().where(User.active == True)
try:
user = active.where(User.username == username).get()
except User.DoesNotExist:
user = None
This method is also exposed via the Model API, in which case it accepts arguments that are translated to the where clause:
user = User.get(User.active == True, User.username == username)
first([n=1])
Parameters: n (int) – Return the first n query results after applying a limit of n records.
Return type: Model instance, list or None if no results
Fetch the first n rows from a query. Behind-the-scenes, a LIMIT n is applied. The results of the query are then cached on the query result wrapper so subsequent calls to first() will not cause multiple queries.
If only one row is requested (default behavior), then the return-type will be either a model instance or None.
If multiple rows are requested, the return type will either be a list of one to n model instances, or None if no results are found.
peek([n=1])
Parameters: n (int) – Return the first n query results.
Return type: Model instance, list or None if no results
Fetch the first n rows from a query. No LIMIT is applied to the query, so peek() has slightly different semantics from first(), which ensures no more than n rows are requested. The peek method, on the other hand, retains the ability to fetch the entire result set without issuing additional queries.
execute()
Return type: QueryResultWrapper
Executes the query and returns a QueryResultWrapper for iterating over the result set. The results are managed internally by the query and whenever a clause is added that would possibly alter the result set, the query is marked for re-execution.
__iter__()
Executes the query and returns populated model instances:
for user in User.select().where(User.active == True):
print user.username
__len__()
Return the number of items in the result set of this query. If all you need is the count of items and do not intend to do anything with the results, call count().
Warning
The SELECT query will be executed and the result set will be loaded. If you want to obtain the number of results without also loading the query, use count().
__getitem__(value)
Parameters: value – Either an index or a slice object.
Return the model instance(s) at the requested indices. To get the first model, for instance:
query = User.select().order_by(User.username)
first_user = query[0]
first_five = query[:5]
__or__(rhs)
Parameters: rhs – Either a SelectQuery or a CompoundSelect
Return type: CompoundSelect
Create a UNION query with the right-hand object. The result will contain all values from both the left and right queries.
customers = Customer.select(Customer.city).where(Customer.state == 'KS')
stores = Store.select(Store.city).where(Store.state == 'KS')
# Get all cities in Kansas where we have either a customer or a store.
all_cities = (customers | stores).order_by(SQL('city'))
Note
SQLite does not allow ORDER BY or LIMIT clauses on the components of a compound query; however, SQLite does allow these clauses on the final, compound result. This applies to UNION (ALL), INTERSECT, and EXCEPT.
__and__(rhs)
Parameters: rhs – Either a SelectQuery or a CompoundSelect
Return type: CompoundSelect
Create an INTERSECT query. The result will contain values that are in both the left and right queries.
customers = Customer.select(Customer.city).where(Customer.state == 'KS')
stores = Store.select(Store.city).where(Store.state == 'KS')
# Get all cities in Kansas where we have both customers and stores.
cities = (customers & stores).order_by(SQL('city'))
__sub__(rhs)
Parameters: rhs – Either a SelectQuery or a CompoundSelect
Return type: CompoundSelect
Create an EXCEPT query. The result will contain values that are in the left-hand query but not in the right-hand query.
customers = Customer.select(Customer.city).where(Customer.state == 'KS')
stores = Store.select(Store.city).where(Store.state == 'KS')
# Get all cities in Kansas where we have customers but no stores.
cities = (customers - stores).order_by(SQL('city'))
__xor__(rhs)
Parameters: rhs – Either a SelectQuery or a CompoundSelect
Return type: CompoundSelect
Create a symmetric difference query. The result will contain values that are in either the left-hand query or the right-hand query, but not both.
customers = Customer.select(Customer.city).where(Customer.state == 'KS')
stores = Store.select(Store.city).where(Store.state == 'KS')
# Get all cities in Kansas where we have either customers with no
# store, or a store with no customers.
cities = (customers ^ stores).order_by(SQL('city'))
class UpdateQuery(model_class, **kwargs)
Example in which users are marked inactive if their registration expired:
uq = UpdateQuery(User, active=False).where(User.registration_expired == True)
uq.execute() # Perform the actual update
Example of an atomic update:
atomic_update = UpdateQuery(PageCount, count=PageCount.count + 1).where(
PageCount.url == url)
atomic_update.execute() # will perform the actual update
execute()
Return type: Number of rows updated
Performs the query
returning(*returning)
Parameters: returning – A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected. To clear any existing values, pass in None.
Return type: an UpdateQuery for the given Model.
Add a RETURNING clause to the query, which will cause the UPDATE to compute return values based on each row that was actually updated.
When the query is executed, rather than returning the number of rows updated, an iterator will be returned that yields the updated objects.
Note
Currently only PostgresqlDatabase supports this feature.
Example:
# Disable all users whose registration expired, and return the user
# objects that were updated.
query = (User
.update(active=False)
.where(User.registration_expired == True)
.returning(User))
# We can iterate over the users that were updated.
for updated_user in query.execute():
send_activation_email(updated_user.email)
For more information, check out the RETURNING clause docs.
tuples()
Return type: UpdateQuery
Note
This method should only be used in conjunction with a call to returning().
When the updated results are returned, they will be returned as row tuples.
dicts()
Return type: UpdateQuery
Note
This method should only be used in conjunction with a call to returning().
When the updated results are returned, they will be returned as dictionaries mapping column to value.
on_conflict([action=None])
Add a SQL ON CONFLICT clause with the specified action to the given UPDATE query. Valid actions are:
- ROLLBACK
- ABORT
- FAIL
- IGNORE
- REPLACE
Specifying None for the action will execute a normal UPDATE query.
Note
This feature is only available on SQLite databases.
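A minimal sketch (SQLite only, per the note above; the action comes from the list of valid actions):
# Roughly: UPDATE OR IGNORE ... SET username = 'huey' WHERE id = 3
q = User.update(username='huey').where(User.id == 3).on_conflict('IGNORE')
q.execute()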
class InsertQuery(model_class[, field_dict=None[, rows=None[, fields=None[, query=None[, validate_fields=False]]]]])
Creates an InsertQuery instance for the given model.
Basic example:
>>> fields = {'username': 'admin', 'password': 'test', 'active': True}
>>> iq = InsertQuery(User, fields)
>>> iq.execute() # insert new row and return primary key
2L
Example inserting multiple rows:
users = [
{'username': 'charlie', 'active': True},
{'username': 'peewee', 'active': False},
{'username': 'huey', 'active': True}]
iq = InsertQuery(User, rows=users)
iq.execute()
Example inserting using a query as the data source:
query = (User
.select(User.username, fn.COUNT(Tweet.id))
.join(Tweet, JOIN.LEFT_OUTER)
.group_by(User.username))
iq = InsertQuery(
UserTweetDenorm,
fields=[UserTweetDenorm.username, UserTweetDenorm.num_tweets],
query=query)
iq.execute()
execute()
Return type: primary key of the new row
Performs the query
upsert([upsert=True])
Perform an INSERT OR REPLACE query with SQLite. MySQL databases will issue a REPLACE query. Currently this feature is not supported for Postgres databases, but the 9.5 syntax will be added soon.
Note
This feature is only available on SQLite and MySQL databases.
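A brief sketch (SQLite/MySQL only, per the note above):
# On SQLite this issues INSERT OR REPLACE; on MySQL, REPLACE.
User.insert(username='charlie', active=True).upsert().execute()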
on_conflict([action=None])
Add a SQL ON CONFLICT clause with the specified action to the given INSERT query. Specifying REPLACE is equivalent to using the upsert() method. Valid actions are:
- ROLLBACK
- ABORT
- FAIL
- IGNORE
- REPLACE
Specifying None for the action will execute a normal INSERT query.
Note
This feature is only available on SQLite databases.
return_id_list([return_id_list=True])
By default, when doing bulk INSERTs, peewee will not return the list of generated primary keys. However, if the database supports returning primary keys via INSERT ... RETURNING, this method instructs peewee to return the generated list of IDs.
Note
Currently only PostgreSQL supports this behavior. While other databases support bulk inserts, they will simply return True instead.
Example:
usernames = [
{'username': username}
for username in ['charlie', 'huey', 'mickey']]
query = User.insert_many(usernames).return_id_list()
user_ids = query.execute()
print user_ids
# prints something like [1, 2, 3]
returning(*returning)
Parameters: returning – A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected. To clear any existing values, pass in None.
Return type: an InsertQuery for the given Model.
Add a RETURNING clause to the query, which will cause the INSERT to compute return values based on each row that was inserted.
When the query is executed, rather than returning the primary key of the new row(s), an iterator will be returned that yields the inserted objects.
Note
Currently only PostgresqlDatabase supports this feature.
Example:
# Create some users, retrieving the list of IDs assigned to them.
query = (User
.insert_many(list_of_user_data)
.returning(User))
# We can iterate over the users that were created.
for new_user in query.execute():
# Do something with the new user's ID...
do_something(new_user.id)
For more information, check out the RETURNING clause docs.
tuples()
Return type: InsertQuery
Note
This method should only be used in conjunction with a call to returning().
When the inserted results are returned, they will be returned as row tuples.
dicts()
Return type: InsertQuery
Note
This method should only be used in conjunction with a call to returning().
When the inserted results are returned, they will be returned as dictionaries mapping column to value.
class DeleteQuery
(model_class)
Creates a DELETE query for the given model.
Note
DeleteQuery will not traverse foreign keys or ensure that constraints are obeyed, so use it with care.
Example deleting users whose account is inactive:
dq = DeleteQuery(User).where(User.active == False)
execute()
Return type: Number of rows deleted
Performs the query
returning(*returning)
Parameters: returning – A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected. To clear any existing values, pass in None.
Return type: a DeleteQuery for the given Model.
Add a RETURNING clause to the query, which will cause the DELETE to compute return values based on each row that was removed from the database.
When the query is executed, rather than returning the number of rows deleted, an iterator will be returned that yields the deleted objects.
Note
Currently only PostgresqlDatabase supports this feature.
Example:
# Delete expired user accounts, returning the user objects that were removed.
query = (User
.delete()
.where(User.account_expired == True)
.returning(User))
# We can iterate over the user objects that were deleted.
for deleted_user in query.execute():
# Do something with the deleted user.
notify_account_deleted(deleted_user.email)
For more information, check out the RETURNING clause docs.
tuples()
Return type: DeleteQuery
Note
This method should only be used in conjunction with a call to returning().
When the deleted results are returned, they will be returned as row tuples.
dicts()
Return type: DeleteQuery
Note
This method should only be used in conjunction with a call to returning().
When the deleted results are returned, they will be returned as dictionaries mapping column to value.
class RawQuery(model_class, sql, *params)
Allows execution of an arbitrary query and returns instances of the model via a QueryResultWrapper.
Note
Generally you will only need this for executing highly optimized SELECT queries.
Warning
If you are executing a parameterized query, you must use the correct interpolation string for your database. SQLite uses '?' and most others use '%s'.
Example selecting users with a given username:
>>> rq = RawQuery(User, 'SELECT * FROM users WHERE username = ?', 'admin')
>>> for obj in rq.execute():
... print obj
<User: admin>
tuples()
Return type: RawQuery
Flag this query indicating it should simply return raw tuples from the cursor. This method is useful when you either do not want or do not need full model instances.
dicts()
Return type: RawQuery
Flag this query indicating it should simply return raw dicts from the cursor. This method is useful when you either do not want or do not need full model instances.
execute()
Return type: a QueryResultWrapper for iterating over the result set. The results are instances of the given model.
Performs the query
class CompoundSelect(model_class, lhs, operator, rhs)
Compound select query.
prefetch(sq, *subqueries)
Pre-fetch the appropriate instances from the subqueries and apply them to their corresponding parent row in the outer query. This function will eagerly load the related instances specified in the subqueries. This is a technique used to save doing O(n) queries for n rows, and rather is O(k) queries for k subqueries.
For example, consider you have a list of users and want to display all their tweets:
# let's impose some small restrictions on our queries
users = User.select().where(User.active == True)
tweets = Tweet.select().where(Tweet.published == True)
# this will perform 2 queries
users_pf = prefetch(users, tweets)
# now we can:
for user in users_pf:
print user.username
for tweet in user.tweets_prefetch:
print '- ', tweet.content
You can prefetch an arbitrary number of items. For instance, suppose we have a photo site, User -> Photo -> (Comments, Tags). That is, users can post photos, and these photos can have tags and comments on them. If we wanted to fetch a list of users, all their photos, and all the comments and tags on the photos:
users = User.select()
published_photos = Photo.select().where(Photo.published == True)
published_comments = Comment.select().where(
(Comment.is_spam == False) &
(Comment.num_flags < 3))
# note that we are just passing the Tag model -- it will be converted
# to a query automatically
users_pf = prefetch(users, published_photos, published_comments, Tag)
# now we can iterate users, photos, and comments/tags
for user in users_pf:
for photo in user.photo_set_prefetch:
for comment in photo.comment_set_prefetch:
# ...
for tag in photo.tag_set_prefetch:
# ...
Note
Subqueries must be related by foreign key and can be arbitrarily deep
Note
For more information, see the Avoiding N+1 queries document and the Using prefetch sub-section.
Warning
prefetch() can use up lots of RAM when the result set is large, and will not warn you if you are doing something dangerous, so it is up to you to know when to use it. Additionally, because of the semantics of subquerying, there may be some cases when prefetch does not act as you expect (for instance, when applying a LIMIT to subqueries, but there may be others) – please report anything you think is a bug to GitHub.
Database and its subclasses
class Database(database[, threadlocals=True[, autocommit=True[, fields=None[, ops=None[, autorollback=False[, use_speedups=True[, **connect_kwargs]]]]]]])
The connect_kwargs dictionary is used for vendor-specific parameters that will be passed back directly to your database driver, allowing you to specify the user, host and password, for instance. For more information and examples, see the vendor-specific parameters document.
Note
If your database name is not known when the class is declared, you can pass None in as the database name, which will mark the database as “deferred”; any attempt to connect while in this state will raise an exception. To initialize your database, call the Database.init() method with the database name.
For an in-depth discussion of run-time database configuration, see the Run-time database configuration section.
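A minimal sketch of the deferred pattern described above (the SqliteDatabase choice and filename are illustrative):
# Declare the database without a name, then initialize it later at run-time.
db = SqliteDatabase(None)

def setup_database(db_name):
    db.init(db_name)
    db.connect()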
A high-level API for working with the supported database engines. The database class:
- Manages the underlying database connection.
- Executes queries.
- Manages transactions and savepoints.
- Creates and drops tables and indexes.
- Introspects the database.
commit_select = False
Whether to issue a commit after executing a select query. With some engines can prevent implicit transactions from piling up.
compiler_class = QueryCompiler
A class suitable for compiling queries
compound_operations = ['UNION', 'INTERSECT', 'EXCEPT']
Supported compound query operations.
compound_select_parentheses = False
Whether UNION (or other compound SELECT queries) allow parentheses around the queries.
distinct_on = False
Whether the database supports DISTINCT ON statements.
drop_cascade = False
Whether the database supports cascading drop table queries.
field_overrides = {}
A mapping of field types to database column types, e.g. {'primary_key': 'SERIAL'}
foreign_keys = True
Whether the given backend enforces foreign key constraints.
for_update = False
Whether the given backend supports selecting rows for update
for_update_nowait = False
Whether the given backend supports selecting rows for update with the NOWAIT option
insert_many = True
Whether the database supports multiple VALUES clauses for INSERT queries.
insert_returning = False
Whether the database supports returning the primary key for newly inserted rows.
interpolation = '?'
The string used by the driver to interpolate query parameters
op_overrides = {}
A mapping of operation codes to string operations, e.g. {OP.LIKE: 'LIKE BINARY'}
quote_char = '"'
The string used by the driver to quote names
reserved_tables = []
Table names that are reserved by the backend – if encountered in the application a warning will be issued.
returning_clause = False
Whether the database supports RETURNING clauses for UPDATE, INSERT and DELETE queries.
Note
Currently only PostgresqlDatabase supports this. See the returning() methods on UpdateQuery, InsertQuery and DeleteQuery for more information.
savepoints = True
Whether the given backend supports savepoints.
sequences = False
Whether the given backend supports sequences
subquery_delete_same_table = True
Whether the given backend supports deleting rows using a subquery that selects from the same table
window_functions = False
Whether the given backend supports window functions.
init(database[, **connect_kwargs])
This method is used to initialize a deferred database. For details on configuring your database at run-time, see the Run-time database configuration section.
Parameters: - database – the name of the database (or filename if using sqlite)
- connect_kwargs – any arbitrary parameters to pass to the database driver when connecting
connect()
Establishes a connection to the database
Note
By default, connections will be stored on a threadlocal, ensuring connections are not shared across threads. To disable this behavior, initialize the database with threadlocals=False.
close()
Closes the connection to the database (if one is open)
Note
If you initialized with threadlocals=True, only a connection local to the calling thread will be closed.
initialize_connection(conn)
Perform additional initialization on a newly-opened connection. For example, if you are using SQLite you may want to enable foreign key constraint enforcement (off by default).
Here is how you might use this hook to load a SQLite extension:
class CustomSqliteDatabase(SqliteDatabase):
def initialize_connection(self, conn):
conn.load_extension('fts5')
get_conn()
Return type: a connection to the database; creates one if one does not exist
get_cursor()
Return type: a cursor for executing queries
last_insert_id(cursor, model)
Parameters: - cursor – the database cursor used to perform the insert query
- model – the model class that was just created
Return type: the primary key of the most recently inserted instance
rows_affected(cursor)
Return type: number of rows affected by the last query
compiler()
Return type: an instance of QueryCompiler using the field and op overrides specified.
execute(clause)
Parameters: clause (Node) – a Node instance or subclass (e.g. a SelectQuery).
The clause will be compiled into SQL then sent to the execute_sql() method.
execute_sql(sql[, params=None[, require_commit=True]])
Parameters: - sql – a string SQL query
- params – a list or tuple of parameters to interpolate
Note
You can configure whether queries will automatically commit by using the set_autocommit() and Database.get_autocommit() methods.
begin([lock_type=None])
Initiate a new transaction. By default not implemented as this is not part of the DB-API 2.0, but provided for API compatibility and to allow SQLite users to specify the isolation level when beginning transactions.
For SQLite users, the valid isolation levels for lock_type are:
exclusive
immediate
deferred
Example usage:
# Calling transaction() in turn calls begin('exclusive').
with db.transaction('exclusive'):
# No other readers or writers allowed while this is active.
(Account
.update(balance=Account.balance - 100)
.where(Account.id == from_acct)
.execute())
(Account
.update(balance=Account.balance + 100)
.where(Account.id == to_acct)
.execute())
commit()
Call commit() on the active connection, committing the current transaction.
rollback()
Call rollback() on the active connection, rolling back the current transaction.
set_autocommit(autocommit)
Parameters: autocommit – a boolean value indicating whether to turn on/off autocommit.
get_autocommit()
Return type: a boolean value indicating whether autocommit is enabled.
get_tables([schema=None])
Return type: a list of table names in the database.
get_indexes(table[, schema=None])
Return type: a list of IndexMetadata instances, representing the indexes for the given table.
get_columns(table[, schema=None])
Return type: a list of ColumnMetadata instances, representing the columns for the given table.
get_primary_keys(table[, schema=None])
Return type: a list containing the primary key column name(s) for the given table.
get_foreign_keys(table[, schema=None])
Return type: a list of ForeignKeyMetadata instances, representing the foreign keys for the given table.
sequence_exists(sequence_name)
Return type: boolean
create_table(model_class[, safe=True])
Parameters: - model_class – Model class.
- safe (bool) – If True, the table will not be created if it already exists.
Warning
Unlike
Model.create_table()
, this method does not create indexes or constraints. This method will only create the table itself. If you wish to create the table along with any indexes and constraints, use eitherModel.create_table()
orDatabase.create_tables()
.
create_index
(model_class, fields[, unique=False])Parameters: - model_class –
Model
table on which to create index - fields – field(s) to create index on (either field instances or field names)
- unique – whether the index should enforce uniqueness
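For example, a short sketch of creating a unique, composite index (the field names are illustrative):
# Enforce uniqueness of (username, email) pairs on the User table.
db.create_index(User, [User.username, User.email], unique=True)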
create_foreign_key
(model_class, field[, constraint=None])Parameters: - model_class –
Model
table on which to create foreign key constraint - field –
Field
object - constraint (str) – Name to give foreign key constraint.
Manually create a foreign key constraint using an
ALTER TABLE
query. This is primarily used when creating a circular foreign key dependency, for example:DeferredPost = DeferredRelation()
class User(Model):
username = CharField()
favorite_post = ForeignKeyField(DeferredPost, null=True)
class Post(Model):
title = CharField()
author = ForeignKeyField(User, related_name='posts')
DeferredPost.set_model(Post)
# Create tables. The foreign key from Post -> User will be created
# automatically, but the foreign key from User -> Post must be added
# manually.
User.create_table()
Post.create_table()
# Manually add the foreign key constraint on `User`, since we could
# not add it until we had created the `Post` table.
db.create_foreign_key(User, User.favorite_post)
create_sequence
(sequence_name)Parameters: sequence_name – name of sequence to create Note
only works with database engines that support sequences
drop_table
(model_class[, fail_silently=False[, cascade=False]])Parameters: - model_class –
Model
table to drop - fail_silently (bool) – if
True
, query will add aIF EXISTS
clause - cascade (bool) – drop table with
CASCADE
option.
drop_sequence
(sequence_name)Parameters: sequence_name – name of sequence to drop Note
only works with database engines that support sequences
create_tables
(models[, safe=False])Parameters: - models (list) – A list of models.
- safe (bool) – Check first whether the table exists before attempting to create it.
This method should be used for creating tables as it will resolve the model dependency graph and ensure the tables are created in the correct order. This method will also create any indexes and constraints defined on the models.
Usage:
db.create_tables([User, Tweet, Something], safe=True)
drop_tables
(models[, safe=False[, cascade=False]])Parameters: - models (list) – A list of models.
- safe (bool) – Check the table exists before attempting to drop it.
- cascade (bool) – drop table with
CASCADE
option.
This method should be used for dropping tables, as it will resolve the model dependency graph and ensure the tables are dropped in the correct order.
Usage:
db.drop_tables([User, Tweet, Something], safe=True)
atomic
([transaction_type=None])Execute statements in either a transaction or a savepoint. The outer-most call to atomic will use a transaction, and any subsequent nested calls will use savepoints.
Parameters: transaction_type (str) – Specify isolation level. This parameter only has effect on SQLite databases, and furthermore, only affects the outer-most call to atomic()
. For more information, seetransaction()
.atomic
can be used as either a context manager or a decorator.Note
For most use-cases, it makes the most sense to always use
atomic()
when you wish to execute queries in a transaction. The benefit of usingatomic
is that you do not need to manually keep track of the transaction stack depth, as this will be managed for you.Context manager example code:
with db.atomic() as txn:
perform_some_operations()
with db.atomic() as nested_txn:
do_other_things()
if something_bad_happened():
# Roll back these changes, but preserve the changes
# made in the outer block.
nested_txn.rollback()
Decorator example code:
@db.atomic()
def create_user(username):
# This function will execute in a transaction/savepoint.
return User.create(username=username)
transaction
([transaction_type=None])Execute statements in a transaction using either a context manager or decorator. If an error is raised inside the wrapped block, the transaction will be rolled back, otherwise statements are committed when exiting. Transactions can also be explicitly rolled back or committed within the transaction block by calling
rollback()
orcommit()
. If you manually commit or roll back, a new transaction will be started automatically.Nested blocks can be wrapped with
transaction
- the database will keep a stack and only commit when it reaches the end of the outermost function / block.Parameters: transaction_type (str) – Specify isolation level, SQLite only. Context manager example code:
```
# Delete a blog instance and all of its associated entries, but
# do so within a transaction.
with database.transaction():
    blog.delete_instance(recursive=True)

# Explicitly roll back a transaction.
with database.transaction() as txn:
    do_some_stuff()
    if something_bad_happened():
        # Roll back any changes made within this block.
        txn.rollback()
```
Decorator example code:
```
@database.transaction()
def transfer_money(from_acct, to_acct, amt):
from_acct.charge(amt)
to_acct.pay(amt)
return amt
```
SQLite users can specify the isolation level by specifying one of the following values for `transaction_type`:
- `exclusive`
- `immediate`
- `deferred`
Example usage:
```
with db.transaction('exclusive'):
# No other readers or writers allowed while this is active.
(Account
.update(balance=Account.balance - 100)
.where(Account.id == from_acct)
.execute())
(Account
.update(balance=Account.balance + 100)
.where(Account.id == to_acct)
.execute())
```
commit_on_success
(func)Note
Use
atomic()
ortransaction()
instead.savepoint
([sid=None])Execute statements in a savepoint using either a context manager or decorator. If an error is raised inside the wrapped block, the savepoint will be rolled back, otherwise statements are committed when exiting. Like
transaction()
, a savepoint can also be explicitly rolled-back or committed by callingrollback()
orcommit()
. If you manually commit or roll back, a new savepoint will not be created.Savepoints can be thought of as nested transactions.
Parameters: sid (str) – An optional string identifier for the savepoint. Context manager example code:
with db.transaction() as txn:
do_some_stuff()
with db.savepoint() as sp1:
do_more_things()
with db.savepoint() as sp2:
even_more()
# Oops, something bad happened, roll back
# just the changes made in this block.
if something_bad_happened():
sp2.rollback()
execution_context
([with_transaction=True])Create an
ExecutionContext
context manager or decorator. Blocks wrapped with an ExecutionContext will run using their own connection. By default, the wrapped block will also run in a transaction, although this can be disabled by specifying with_transaction=False
.For more explanation of
ExecutionContext
, see the Advanced Connection Management section.Warning
ExecutionContext is very new and has not been tested extensively.
classmethod
register_fields
(fields)Register a mapping of field overrides for the database class. Used to register custom fields or override the defaults.
Parameters: fields (dict) – A mapping of db_field
to column typeclassmethod
register_ops
(ops)Register a mapping of operations understood by the QueryCompiler to their SQL equivalent, e.g.
{OP.EQ: '='}
. Used to extend the types of field comparisons.Parameters: ops (dict) – A mapping of operation
to its SQL equivalent.extract_date
(date_part, date_field)Return an expression suitable for extracting a date part from a date field. For instance, extract the year from a
DateTimeField
.Parameters: - date_part (str) – The date part attribute to retrieve. Valid options are: “year”, “month”, “day”, “hour”, “minute” and “second”.
- date_field (Field) – field instance storing a datetime, date or time.
Return type: an expression object.
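For example, a sketch of counting tweets per year, reusing the Tweet.timestamp field from the truncate_date example below:
year = db.extract_date('year', Tweet.timestamp)
query = (Tweet
         .select(year.alias('year'), fn.Count(Tweet.id).alias('ct'))
         .group_by(year))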
truncate_date
(date_part, date_field)Return an expression suitable for truncating a date / datetime to the given resolution. This can be used, for example, to group a collection of timestamps by day.
Parameters: - date_part (str) – The date part to truncate to. Valid options are: “year”, “month”, “day”, “hour”, “minute” and “second”.
- date_field (Field) – field instance storing a datetime, date or time.
Return type: an expression object.
Example:
# Get tweets from today.
tweets = Tweet.select().where(
db.truncate_date('day', Tweet.timestamp) == datetime.date.today())
class SqliteDatabase
(Database)
Database
subclass that works with the sqlite3
driver (or pysqlite2
). In addition to the default database parameters, SqliteDatabase
also accepts a journal_mode parameter which will configure the journaling mode.
Note
If you have both sqlite3
and pysqlite2
installed on your system, peewee will use whichever points at a newer version of SQLite.
Note
SQLite is unique among the databases supported by Peewee in that it allows a high degree of customization by the host application. This means you can do things like write custom functions or aggregates in Python and then call them from your SQL queries. This feature, and many more, are available through the SqliteExtDatabase
, part of playhouse.sqlite_ext
. I strongly recommend you use SqliteExtDatabase
as it exposes many of the features that make SQLite so powerful.
Custom parameters:
Parameters:
- journal_mode (str) – journaling mode to configure on new connections.
- pragmas – a list of 2-tuples containing the pragma name and value to run against new connections.
SQLite allows run-time configuration of a number of parameters through PRAGMA
statements (documentation). These statements are typically run against a new database connection. To run one or more PRAGMA
statements against new connections, you can specify them as a list of 2-tuples containing the pragma name and value:
db = SqliteDatabase('my_app.db', pragmas=(
('journal_mode', 'WAL'),
('cache_size', 10000),
('mmap_size', 1024 * 1024 * 32),
))
insert_many = True *if* using SQLite 3.7.11.0 or newer.
class MySQLDatabase
(Database)
Database
subclass that works with either “MySQLdb” or “pymysql”.
commit_select = True
compound_operations = ['UNION']
for_update = True
subquery_delete_same_table = False
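A minimal connection sketch (the database name and credentials are placeholders; extra keyword arguments are generally passed through to the underlying driver):
db = MySQLDatabase('my_app', host='localhost', user='app', passwd='secret')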
class PostgresqlDatabase
(Database)
Database
subclass that works with the “psycopg2” driver
commit_select = True
compound_select_parentheses = True
distinct_on = True
for_update = True
for_update_nowait = True
insert_returning = True
returning_clause = True
sequences = True
window_functions = True
register_unicode = True
Control whether the
UNICODE
andUNICODEARRAY
psycopg2 extensions are loaded automatically.
Transaction, Savepoint and ExecutionContext
The easiest way to create transactions and savepoints is to use Database.atomic()
. The atomic()
method will create a transaction or savepoint depending on the level of nesting.
with db.atomic() as txn:
# The outer-most call will be a transaction.
with db.atomic() as sp:
# Nested calls will be savepoints instead.
execute_some_statements()
class transaction
(database)
Context manager that encapsulates a database transaction. Statements executed within the wrapped block will be committed at the end of the block unless an exception occurs, in which case any changes will be rolled back.
Warning
Transactions should not be nested as this could lead to unpredictable behavior in the event of an exception in a nested block. If you wish to use nested transactions, use the atomic()
method, which will create a transaction at the outer-most layer and use savepoints for nested blocks.
Note
In practice you should not create transaction
objects directly, but rather use the Database.transaction()
method.
commit
()Manually commit any pending changes and begin a new transaction.
rollback
()Manually roll-back any pending changes and begin a new transaction.
class savepoint
(database[, sid=None])
Context manager that encapsulates a savepoint (nested transaction). Statements executed within the wrapped block will be committed at the end of the block unless an exception occurs, in which case any changes will be rolled back.
Warning
Savepoints must be created within a transaction. It is recommended that you use atomic()
instead of manually managing the transaction+savepoint stack.
Note
In practice you should not create savepoint
objects directly, but rather use the Database.savepoint()
method.
commit
()Manually commit any pending changes. If the savepoint is manually committed and additional changes are made, they will be executed in the context of the outer block.
rollback
()Manually roll-back any pending changes. If the savepoint is manually rolled-back and additional changes are made, they will be executed in the context of the outer block.
class ExecutionContext
(database[, with_transaction=True])
ExecutionContext provides a way to explicitly run statements in a dedicated connection. Typically a single database connection is maintained per-thread, but in some situations you may wish to explicitly force a new, separate connection. To accomplish this, you can create an ExecutionContext
. Statements executed in the wrapped block will be run in a transaction by default, though you can disable this by specifying with_transaction=False
.
Note
Rather than instantiating ExecutionContext
directly, use Database.execution_context()
.
Example code:
# This will return the connection associated with the current thread.
conn = db.get_conn()
with db.execution_context():
# This will be a new connection object. If you are using the
# connection pool, it may be an unused connection from the pool.
ctx_conn = db.get_conn()
# This statement is executed using the new `ctx_conn`.
User.create(username='huey')
# At the end of the wrapped block, the connection will be closed and the
# transaction, if one exists, will be committed.
# This statement is executed using the regular `conn`.
User.create(username='mickey')
class Using
(database, models[, with_transaction=True])
For the duration of the wrapped block, all queries against the given models
will use the specified database
. Optionally these queries can be run outside a transaction by specifying with_transaction=False
.
Using
provides, in short, a way to run queries on a list of models using a manually specified database.
Parameters:
- database – the Database instance to use for the duration of the wrapped block.
- models (list) – the models whose queries should be routed to the given database.
- with_transaction (bool) – whether the wrapped block should run in a transaction.
Warning
The Using
context manager does not do anything to manage the database connections, so it is the user's responsibility to ensure the database is closed explicitly.
Example:
master = PostgresqlDatabase('master')
replica = PostgresqlDatabase('replica')
class Data(Model):
value = IntegerField()
class Meta:
database = master
# All these queries use the "master" database,
# since that is what our Data model was configured
# to use.
for i in range(10):
Data.create(value=i)
Data.insert_many({Data.value: j} for j in range(100, 200)).execute()
# To use the read replica, we can use the Using context manager.
with Using(replica, [Data]):
# Query is executed against the read replica.
n_data = Data.select().count()
# Since we did not specify this model in the list passed
# to Using, it will use whatever database it was defined with.
other_count = SomeOtherModel.select().count()
Metadata Types
class IndexMetadata
(name, sql, columns, unique, table)
name
The name of the index.
sql
The SQL query used to generate the index.
columns
A list of columns that are covered by the index.
unique
A boolean value indicating whether the index has a unique constraint.
table
The name of the table containing this index.
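IndexMetadata instances are returned by Database.get_indexes(). A short sketch of inspecting them (the table name is illustrative):
# Print the name, covered columns and uniqueness of each index on "tweet".
for index in db.get_indexes('tweet'):
    print(index.name, index.columns, index.unique)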
class ColumnMetadata
(name, data_type, null, primary_key, table)
name
The name of the column.
data_type
The data type of the column
null
A boolean value indicating whether
NULL
is permitted in this column.primary_key
A boolean value indicating whether this column is a primary key.
table
The name of the table containing this column.
class ForeignKeyMetadata
(column, dest_table, dest_column, table)
column
The column containing the foreign key (the “source”).
dest_table
The table referenced by the foreign key.
dest_column
The column referenced by the foreign key (on
dest_table
).table
The name of the table containing this foreign key.
Misc
class fn
A helper class that will convert arbitrary function calls to SQL function calls.
To express functions in peewee, use the fn
object. Anything to the right of the “dot” operator is treated as a function, and you can pass that function arbitrary parameters, which may themselves be other valid expressions.
For example:
Peewee expression | Equivalent SQL |
---|---|
fn.Count(Tweet.id).alias('count') | Count(t1."id") AS count |
fn.Lower(fn.Substr(User.username, 1, 1)) | Lower(Substr(t1."username", 1, 1)) |
fn.Rand().alias('random') | Rand() AS random |
fn.Stddev(Employee.salary).alias('sdv') | Stddev(t1."salary") AS sdv |
over
([partition_by=None[, order_by=None[, start=None[, end=None[, window=None]]]]])Basic support for SQL window functions.
Parameters:
- partition_by (list) – a list of nodes to partition over.
- order_by (list) – a list of nodes to order the window by.
- start – the start of the window frame (see Window.preceding()).
- end – the end of the window frame (see Window.following()).
- window (Window) – a Window instance to use instead of the parameters above.
Examples:
# Get the list of employees and the average salary for their dept.
query = (Employee
.select(
Employee.name,
Employee.department,
Employee.salary,
fn.Avg(Employee.salary).over(
partition_by=[Employee.department]))
.order_by(Employee.name))
# Rank employees by salary.
query = (Employee
.select(
Employee.name,
Employee.salary,
fn.rank().over(
order_by=[Employee.salary])))
# Get a list of page-views, along with avg pageviews for that day.
query = (PageView
.select(
PageView.url,
PageView.timestamp,
fn.Count(PageView.id).over(
partition_by=[fn.date_trunc(
'day', PageView.timestamp)]))
.order_by(PageView.timestamp))
# Same as above but using a window class.
window = Window(partition_by=[fn.date_trunc('day', PageView.timestamp)])
query = (PageView
.select(
PageView.url,
PageView.timestamp,
fn.Count(PageView.id).over(window=window))
.window(window) # Need to include our Window here.
.order_by(PageView.timestamp))
# Get the list of times along with the last time.
query = (Times
.select(
Times.time,
fn.LAST_VALUE(Times.time).over(
order_by=[Times.time],
start=Window.preceding(),
end=Window.following())))
class SQL
(sql, *params)
Add fragments of SQL to a peewee query. For example you might want to reference an aliased name.
Parameters:
- sql (str) – an arbitrary SQL string.
- params – parameters to interpolate into the SQL string, if any.
# Retrieve user table and "annotate" it with a count of tweets for each
# user.
query = (User
.select(User, fn.Count(Tweet.id).alias('ct'))
.join(Tweet, JOIN.LEFT_OUTER)
.group_by(User))
# Sort the users by number of tweets.
query = query.order_by(SQL('ct DESC'))
class Window
([partition_by=None[, order_by=None[, start=None[, end=None]]]])
Create a WINDOW
definition.
Parameters:
- partition_by (list) – a list of nodes to partition over.
- order_by (list) – a list of nodes to order the window by.
- start – the start of the window frame (see Window.preceding()).
- end – the end of the window frame (see Window.following()).
Examples:
# Get the list of employees and the average salary for their dept.
window = Window(partition_by=[Employee.department]).alias('dept_w')
query = (Employee
.select(
Employee.name,
Employee.department,
Employee.salary,
fn.Avg(Employee.salary).over(window))
.window(window)
.order_by(Employee.name))
static
preceding
([value=None])Return an expression appropriate for passing in to the
start
orend
clause of aWindow
object. Ifvalue
is not provided, then it will beUNBOUNDED PRECEDING
.static
following
([value=None])Return an expression appropriate for passing in to the
start
orend
clause of aWindow
object. Ifvalue
is not provided, then it will beUNBOUNDED FOLLOWING
.
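For example, a sketch of a bounded frame computing a moving average over the current row and the three preceding rows (the Sale model is hypothetical; when the end of the frame is omitted, SQL ends the frame at the current row):
query = (Sale
         .select(
             Sale.date,
             Sale.amount,
             fn.Avg(Sale.amount).over(
                 order_by=[Sale.date],
                 start=Window.preceding(3)))
         .order_by(Sale.date))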
class DeferredRelation
Used to reference a not-yet-created model class. Stands in as a placeholder for the related model of a foreign key. Useful for circular references.
DeferredPost = DeferredRelation()
class User(Model):
username = CharField()
# `Post` is not available yet, it is declared below.
favorite_post = ForeignKeyField(DeferredPost, null=True)
class Post(Model):
# `Post` comes after `User` since it refers to `User`.
user = ForeignKeyField(User)
title = CharField()
DeferredPost.set_model(Post) # Post is now available.
set_model
(model)Replace the placeholder with the correct model class.
class Proxy
Proxy class useful for situations when you wish to defer the initialization of an object. For instance, you may want to define your models but not know which database engine you will be using until runtime.
Example:
database_proxy = Proxy() # Create a proxy for our db.
class BaseModel(Model):
class Meta:
database = database_proxy # Use proxy for our DB.
class User(BaseModel):
username = CharField()
# Based on configuration, use a different database.
if app.config['DEBUG']:
database = SqliteDatabase('local.db')
elif app.config['TESTING']:
database = SqliteDatabase(':memory:')
else:
database = PostgresqlDatabase('mega_production_db')
# Configure our proxy to use the db we specified in config.
database_proxy.initialize(database)
initialize
(obj)Parameters: obj – The object to proxy to. Once initialized, the attributes and methods on
obj
can be accessed directly via theProxy
instance.
class Node
The Node
class is the parent class for all composable parts of a query, and forms the basis of peewee’s expression API. The following classes extend Node
:
SelectQuery
,UpdateQuery
,InsertQuery
,DeleteQuery
, andRawQuery
.Field
Func
(andfn()
)SQL
Expression
Param
Window
Clause
Entity
Check
Overridden operators:
- Bitwise "and" and "or" (`&` and `|`): combine multiple nodes using the given conjunction.
- `+`, `-`, `*`, `/` and `^`: add, subtract, multiply, divide and exclusive-or.
- `==`, `!=`, `<`, `<=`, `>`, `>=`: create a binary expression using the given comparator.
- `<<`: create an IN expression.
- `>>`: create an IS expression.
- `%` and `**`: LIKE and ILIKE.

contains
(rhs)Create a binary expression using case-insensitive string search.
startswith
(rhs)Create a binary expression using case-insensitive prefix search.
endswith
(rhs)Create a binary expression using case-insensitive suffix search.
between
(low, high)Create an expression that will match values between
low
andhigh
.regexp
(expression)Match based on regular expression.
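A few sketches of these helpers, assuming a User model with username and join_date fields (the specific values are illustrative):
# Case-insensitive substring search.
matches = User.select().where(User.username.contains('arol'))

# Users who joined at some point during 2015.
joined_2015 = User.select().where(User.join_date.between(
    datetime.date(2015, 1, 1), datetime.date(2015, 12, 31)))

# IN expression using the << operator.
admins = User.select().where(User.username << ['charlie', 'huey'])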
concat
(rhs)Concatenate the current node with the provided
rhs
.Warning
In order for this method to work with MySQL, the MySQL session must be set to use
PIPES_AS_CONCAT
.To reliably concatenate strings with MySQL, use
fn.CONCAT(s1, s2...)
instead.is_null
([is_null=True])Create an expression testing whether the
Node
is (or is not)NULL
.# Find all categories whose parent column is NULL.
root_nodes = Category.select().where(Category.parent.is_null())
# Find all categories whose parent is NOT NULL.
child_nodes = Category.select().where(Category.parent.is_null(False))
To simplify things, peewee will generate the correct SQL for equality and inequality. The
is_null()
method is provided simply for readability.# Equivalent to the previous queries -- peewee will translate these
# into `IS NULL` and `IS NOT NULL`:
root_nodes = Category.select().where(Category.parent == None)
child_nodes = Category.select().where(Category.parent != None)
__invert__
()Negate the node. This translates roughly into NOT (<node>).
alias
([name=None])Apply an alias to the given node. This translates into <node> AS <name>.
asc
()Apply ascending ordering to the given node. This translates into <node> ASC.
desc
()Apply descending ordering to the given node. This translates into <node> DESC.
bind_to
(model_class)Bind the results of an expression to a specific model type. Useful when adding expressions to a select, where the result of the expression should be placed on a particular joined instance.
classmethod
extend
([name=None[, clone=False]])Decorator for adding the decorated function as a new method on
Node
and its subclasses. Useful for adding implementation-specific features to all node types.Parameters: - name (str) – Method name. If not provided the name of the wrapped function will be used.
- clone (bool) – Whether this method should return a clone. This is generally true when the method mutates the internal state of the node.
Example:
# Add a `cast()` method to all nodes using the '::' operator.
PostgresqlDatabase.register_ops({'::': '::'})
@Node.extend()
def cast(self, as_type):
return Expression(self, '::', SQL(as_type))
# Let's pretend we want to find all data points whose numbers
# are palindromes. Note that we can use the new *cast* method
# on both fields and with the `fn` helper:
reverse_val = fn.REVERSE(DataPoint.value.cast('str')).cast('int')
query = (DataPoint
.select()
.where(DataPoint.value == reverse_val))
Note
To remove an extended method, simply call
delattr
on the class the method was originally added to.