- Microsoft SQL Server
- DBAPI Support
- Auto Increment Behavior / IDENTITY Columns
- MAX on VARCHAR / NVARCHAR
- Collation Support
- LIMIT/OFFSET Support
- Transaction Isolation Level
- Nullability
- Date / Time Handling
- Large Text/Binary Type Deprecation
- Multipart Schema Names
- Legacy Schema Mode
- Clustered Index Support
- MSSQL-Specific Index Options
- Compatibility Levels
- Triggers
- Rowcount Support / ORM Versioning
- Enabling Snapshot Isolation
- SQL Server SQL Constructs
- SQL Server Data Types
- PyODBC
- mxODBC
- pymssql
- zxjdbc
- AdoDBAPI
Microsoft SQL Server
Support for the Microsoft SQL Server database.
DBAPI Support
The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.
Auto Increment Behavior / IDENTITY Columns
SQL Server provides so-called “auto incrementing” behavior using the IDENTITY
construct, which can be placed on any single integer column in a table. SQLAlchemy considers IDENTITY
within its default “autoincrement” behavior for an integer primary key column, described at Column.autoincrement.
This means that by default, the first integer primary key column in a Table
will be considered to be the identity column and will generate DDL as such:
- from sqlalchemy import Table, MetaData, Column, Integer
- m = MetaData()
- t = Table('t', m,
- Column('id', Integer, primary_key=True),
- Column('x', Integer))
- m.create_all(engine)
The above example will generate DDL as:
- CREATE TABLE t (
- id INTEGER NOT NULL IDENTITY(1,1),
- x INTEGER NULL,
- PRIMARY KEY (id)
- )
For the case where this default generation of IDENTITY is not desired, specify False
for the Column.autoincrement flag on the first integer primary key column:
- m = MetaData()
- t = Table('t', m,
- Column('id', Integer, primary_key=True, autoincrement=False),
- Column('x', Integer))
- m.create_all(engine)
To add the IDENTITY keyword to a non-primary key column, specify True
for the Column.autoincrement flag on the desired Column
object, and ensure that Column.autoincrement is set to False
on any integer primary key column:
- m = MetaData()
- t = Table('t', m,
- Column('id', Integer, primary_key=True, autoincrement=False),
- Column('x', Integer, autoincrement=True))
- m.create_all(engine)
Changed in version 1.3: Added mssql_identity_start and mssql_identity_increment
parameters to Column. These replace the use of the Sequence
object in order to specify these values.
Deprecated since version 1.3: The use of Sequence
to specify IDENTITY characteristics is deprecated and will be removed in a future release. Please use the mssql_identity_start
and mssql_identity_increment parameters documented at Auto Increment Behavior / IDENTITY Columns.
Note
There can only be one IDENTITY column on the table. When using autoincrement=True
to enable the IDENTITY keyword, SQLAlchemy does not guard against multiple columns specifying the option simultaneously. The SQL Server database will instead reject the CREATE TABLE statement.
Note
An INSERT statement which attempts to provide a value for a column that is marked with IDENTITY will be rejected by SQL Server. In order for the value to be accepted, a session-level option “SET IDENTITY_INSERT” must be enabled. The SQLAlchemy SQL Server dialect will perform this operation automatically when using a core Insert
construct; if the execution specifies a value for the IDENTITY column, the “IDENTITY_INSERT” option will be enabled for the span of that statement’s invocation. However, this scenario is not high performing and should not be relied upon for normal use. If a table doesn’t actually require IDENTITY behavior in its integer primary key column, the keyword should be disabled when creating the table by ensuring that autoincrement=False is set.
Controlling “Start” and “Increment”
Specific control over the “start” and “increment” values for the IDENTITY
generator is provided using the mssql_identity_start
and mssql_identity_increment parameters passed to the Column object:
- from sqlalchemy import Table, Integer, Column
- test = Table(
- 'test', metadata,
- Column(
- 'id', Integer, primary_key=True, mssql_identity_start=100,
- mssql_identity_increment=10
- ),
- Column('name', String(20))
- )
The CREATE TABLE for the above Table object would be:
- CREATE TABLE test (
- id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
- name VARCHAR(20) NULL
- )
Changed in version 1.3: The mssql_identity_start and mssql_identity_increment
parameters are now used to affect the IDENTITY generator for a Column
under SQL Server. Previously, the Sequence object was used. As SQL Server now supports real sequences as a separate construct, Sequence
will be functional in the normal way in a future SQLAlchemy version.
INSERT behavior
Handling of the IDENTITY column at INSERT time involves two key techniques. The most common is being able to fetch the “last inserted value” for a given IDENTITY
column, a process which SQLAlchemy performs implicitly in many cases, most importantly within the ORM.
The process for fetching this value has several variants:
- In the vast majority of cases, RETURNING is used in conjunction with INSERT statements on SQL Server in order to get newly generated primary key values:
- INSERT INTO t (x) OUTPUT inserted.id VALUES (?)
- When RETURNING is not available or has been disabled via implicit_returning=False,
either the scope_identity() function or the @@identity
variable is used; behavior varies by backend:
- When using PyODBC, the phrase ; select scope_identity()
will be appended to the end of the INSERT statement; a second result set will be fetched in order to receive the value. Given a table as:
- t = Table('t', m, Column('id', Integer, primary_key=True),
- Column('x', Integer),
- implicit_returning=False)
an INSERT will look like:
- INSERT INTO t (x) VALUES (?); select scope_identity()
- Other dialects such as pymssql will call upon SELECT scope_identity() AS lastrowid
subsequent to an INSERT statement. If the flag use_scope_identity=False
is passed to create_engine(), the statement SELECT @@identity AS lastrowid
is used instead.
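The OUTPUT clause rendering can be previewed without a database by compiling an insert() with an explicit returning() against the MSSQL dialect; a minimal sketch (table and column names here are illustrative, and implicit RETURNING itself is applied at execution time, so it is requested explicitly):

```python
from sqlalchemy import Table, Column, Integer, MetaData
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table("t", m, Column("id", Integer, primary_key=True), Column("x", Integer))

# an explicit returning() renders SQL Server's OUTPUT clause
stmt = t.insert().values(x=5).returning(t.c.id)
print(stmt.compile(dialect=mssql.dialect()))
```

The compiled string contains `OUTPUT inserted.id` between the column list and the VALUES clause, matching the echo output shown above.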
A table that contains an IDENTITY column will prohibit an INSERT statement that refers to the identity column explicitly. The SQLAlchemy dialect will detect when an INSERT construct, created using a core insert()
construct (not a plain string SQL), refers to the identity column, and in this case will emit SET IDENTITY_INSERT ON
prior to the insert statement proceeding, and SET IDENTITY_INSERT OFF
subsequent to the execution. Given this example:
- m = MetaData()
- t = Table('t', m, Column('id', Integer, primary_key=True),
- Column('x', Integer))
- m.create_all(engine)
- engine.execute(t.insert(), {'id': 1, 'x': 1}, {'id': 2, 'x': 2})
The above column will be created with IDENTITY, however the INSERT statement we emit is specifying explicit values. In the echo output we can see how SQLAlchemy handles this:
- CREATE TABLE t (
- id INTEGER NOT NULL IDENTITY(1,1),
- x INTEGER NULL,
- PRIMARY KEY (id)
- )
- COMMIT
- SET IDENTITY_INSERT t ON
- INSERT INTO t (id, x) VALUES (?, ?)
- ((1, 1), (2, 2))
- SET IDENTITY_INSERT t OFF
- COMMIT
This is an auxiliary use case suitable for testing and bulk insert scenarios.
MAX on VARCHAR / NVARCHAR
SQL Server supports the special string “MAX” within the sqltypes.VARCHAR
and sqltypes.NVARCHAR datatypes, to indicate “maximum length possible”. The dialect currently handles this as a length of “None” in the base type, rather than supplying a dialect-specific version of these types, so that a base type specified such as VARCHAR(None)
can assume “unlengthed” behavior on more than one backend without using dialect-specific types.
To build a SQL Server VARCHAR or NVARCHAR with MAX length, use None:
- my_table = Table(
- 'my_table', metadata,
- Column('my_data', VARCHAR(None)),
- Column('my_n_data', NVARCHAR(None))
- )
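The “MAX” rendering can be verified without a database by compiling the CREATE TABLE statement against the MSSQL dialect; a minimal sketch:

```python
from sqlalchemy import Column, MetaData, NVARCHAR, Table, VARCHAR
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

metadata = MetaData()
my_table = Table(
    "my_table",
    metadata,
    Column("my_data", VARCHAR(None)),     # renders VARCHAR(max)
    Column("my_n_data", NVARCHAR(None)),  # renders NVARCHAR(max)
)
print(CreateTable(my_table).compile(dialect=mssql.dialect()))
```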
Collation Support
Character collations are supported by the base string types, specified by the string argument “collation”:
- from sqlalchemy import VARCHAR
- Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))
When such a column is associated with a Table, the CREATE TABLE statement for this column will yield:
- login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL
LIMIT/OFFSET Support
MSSQL has no support for the LIMIT or OFFSET keywords. LIMIT is supported directly through the TOP
Transact SQL keyword:
- select.limit
will yield:
- SELECT TOP n
If using SQL Server 2005 or above, LIMIT with OFFSET support is available through the ROW_NUMBER OVER
construct. For versions below 2005, LIMIT with OFFSET usage will fail.
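The TOP rendering can be seen by compiling a statement with a limit against the MSSQL dialect; a minimal sketch (table and column names are illustrative):

```python
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table("t", m, Column("id", Integer, primary_key=True), Column("x", Integer))

# .limit() renders via the TOP keyword under the MSSQL dialect
stmt = t.select().limit(5)
print(stmt.compile(dialect=mssql.dialect()))
```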
Transaction Isolation Level
All SQL Server dialects support setting of transaction isolation level both via a dialect-specific parameter create_engine.isolation_level
accepted by create_engine(), as well as the Connection.execution_options.isolation_level
argument as passed to Connection.execution_options(). This feature works by issuing the command SET TRANSACTION ISOLATION LEVEL <level>
for each new connection.
To set isolation level using create_engine():
- engine = create_engine(
- "mssql+pyodbc://scott:tiger@ms_2008",
- isolation_level="REPEATABLE READ"
- )
To set using per-connection execution options:
- connection = engine.connect()
- connection = connection.execution_options(
- isolation_level="READ COMMITTED"
- )
Valid values for isolation_level include:
- AUTOCOMMIT - pyodbc / pymssql-specific
- READ COMMITTED
- READ UNCOMMITTED
- REPEATABLE READ
- SERIALIZABLE
- SNAPSHOT - specific to SQL Server
New in version 1.1: support for isolation level setting on Microsoft SQL Server.
New in version 1.2: added AUTOCOMMIT isolation level setting.
Nullability
MSSQL has support for three levels of column nullability. The default nullability allows nulls and is explicit in the CREATE TABLE construct:
- name VARCHAR(20) NULL
If nullable=None is specified then no specification is made. In other words the database’s configured default is used. This will render:
- name VARCHAR(20)
If nullable is True or False then the column will be NULL
or NOT NULL respectively.
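The three modes can be compared side by side by compiling DDL against the MSSQL dialect; a minimal sketch (table and column names are illustrative):

```python
from sqlalchemy import Column, MetaData, String, Table
from sqlalchemy.dialects import mssql
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table(
    "t",
    m,
    Column("a", String(20)),                  # default: renders NULL
    Column("b", String(20), nullable=False),  # renders NOT NULL
    Column("c", String(20), nullable=None),   # no specification is made
)
print(CreateTable(t).compile(dialect=mssql.dialect()))
```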
Date / Time Handling
DATE and TIME are supported. Bind parameters are converted to datetime.datetime() objects as required by most MSSQL drivers, and results are processed from strings if needed. The DATE and TIME types are not available for MSSQL 2005 and previous - if a server version below 2008 is detected, DDL for these types will be issued as DATETIME.
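On SQL Server 2008 and above the types render directly, which can be previewed offline; a minimal sketch (the DATETIME fallback described above depends on a detected server version below 2008, so it is not exercised here):

```python
from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects import mssql
from sqlalchemy.dialects.mssql import DATE, TIME
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table("t", m, Column("d", DATE), Column("tm", TIME))
print(CreateTable(t).compile(dialect=mssql.dialect()))
```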
Large Text/Binary Type Deprecation
Per SQL Server 2012/2014 Documentation, the NTEXT, TEXT and IMAGE
datatypes are to be removed from SQL Server in a future release. SQLAlchemy normally relates these types to the UnicodeText, Text and LargeBinary datatypes.
In order to accommodate this change, a new flag deprecate_large_types
is added to the dialect, which will be automatically set based on detection of the server version in use, if not otherwise set by the user. The behavior of this flag is as follows:
- When this flag is True, the UnicodeText, Text and LargeBinary
datatypes, when used to render DDL, will render the types NVARCHAR(max), VARCHAR(max), and VARBINARY(max),
respectively. This is a new behavior as of the addition of this flag.
- When this flag is False, the UnicodeText, Text and LargeBinary
datatypes, when used to render DDL, will render the types NTEXT, TEXT, and IMAGE,
respectively. This is the long-standing behavior of these types.
- The flag begins with the value None, before a database connection is established. If the dialect is used to render DDL without the flag being set, it is interpreted the same as False.
- On first connection, the dialect detects if SQL Server version 2012 or greater is in use; if the flag is still at None, it sets it to True or False based on whether 2012 or greater is detected.
- The flag can be set to either True or False when the dialect is created, typically via create_engine():
- eng = create_engine("mssql+pymssql://user:pass@host/db",
- deprecate_large_types=True)
Complete control over whether the “old” or “new” types are rendered is available in all SQLAlchemy versions by using the UPPERCASE type objects instead: NVARCHAR, VARCHAR, types.VARBINARY, TEXT, mssql.NTEXT, mssql.IMAGE
will always remain fixed and always output exactly that type.
New in version 1.0.0.
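The difference can be observed offline by compiling DDL with and without the flag; a minimal sketch comparing UnicodeText against the fixed NTEXT type (table and column names are illustrative):

```python
from sqlalchemy import Column, MetaData, Table, UnicodeText
from sqlalchemy.dialects import mssql
from sqlalchemy.dialects.mssql import NTEXT
from sqlalchemy.schema import CreateTable

m = MetaData()
t = Table(
    "t",
    m,
    Column("a", UnicodeText()),  # rendering depends on deprecate_large_types
    Column("b", NTEXT()),        # UPPERCASE type: always renders NTEXT
)

# flag unset and no connection made: interpreted the same as False
print(CreateTable(t).compile(dialect=mssql.dialect()))

# flag set True: UnicodeText renders NVARCHAR(max); NTEXT is unchanged
print(CreateTable(t).compile(dialect=mssql.dialect(deprecate_large_types=True)))
```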
Multipart Schema Names
SQL Server schemas sometimes require multiple parts to their “schema” qualifier, that is, including the database name and owner name as separate tokens, such as mydatabase.dbo.some_table. These multipart names can be set at once using the Table.schema
argument of Table:
- Table(
- "some_table", metadata,
- Column("q", String(50)),
- schema="mydatabase.dbo"
- )
When performing operations such as table or component reflection, a schema argument that contains a dot will be split into separate “database” and “owner” components in order to correctly query the SQL Server information schema tables, as these two values are stored separately. Additionally, when rendering the schema name for DDL or SQL, the two components will be quoted separately for case sensitive names and other special characters. Given an argument as below:
- Table(
- "some_table", metadata,
- Column("q", String(50)),
- schema="MyDataBase.dbo"
- )
The above schema would be rendered as [MyDataBase].dbo, and also in reflection, would be reflected using “dbo” as the owner and “MyDataBase” as the database name.
To control how the schema name is broken into database / owner, specify brackets (which in SQL Server are quoting characters) in the name. Below, the “owner” will be considered as MyDataBase.dbo and the “database” will be None:
- Table(
- "some_table", metadata,
- Column("q", String(50)),
- schema="[MyDataBase.dbo]"
- )
To individually specify both database and owner name with special charactersor embedded dots, use two sets of brackets:
- Table(
- "some_table", metadata,
- Column("q", String(50)),
- schema="[MyDataBase.Period].[MyOwner.Dot]"
- )
Changed in version 1.2: the SQL Server dialect now treats brackets as identifier delimiters, splitting the schema into separate database and owner tokens, to allow dots within either name itself.
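The splitting rules above can be illustrated with a simplified, hypothetical sketch; the dialect's actual parser is more thorough and also handles further quoting edge cases:

```python
def split_schema(schema):
    """Illustrative only: split 'database.owner' into (database, owner),
    treating square brackets as quoting characters as described above."""
    if "].[" in schema:
        # two bracketed tokens: database and owner, each may contain dots
        db, owner = schema.split("].[")
        return db[1:], owner[:-1]
    if schema.startswith("[") and schema.endswith("]"):
        # a single bracketed token is taken as the owner; no database part
        return None, schema[1:-1]
    if "." in schema:
        # unquoted dot separates database from owner
        db, owner = schema.split(".", 1)
        return db, owner
    return None, schema

print(split_schema("MyDataBase.dbo"))
print(split_schema("[MyDataBase.dbo]"))
print(split_schema("[MyDataBase.Period].[MyOwner.Dot]"))
```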
Legacy Schema Mode
Very old versions of the MSSQL dialect introduced the behavior such that a schema-qualified table would be auto-aliased when used in a SELECT statement; given a table:
- account_table = Table(
- 'account', metadata,
- Column('id', Integer, primary_key=True),
- Column('info', String(100)),
- schema="customer_schema"
- )
this legacy mode of rendering would assume that “customer_schema.account” would not be accepted by all parts of the SQL statement, as illustrated below:
- >>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
- >>> print(account_table.select().compile(eng))
- SELECT account_1.id, account_1.info
- FROM customer_schema.account AS account_1
This mode of behavior is now off by default, as it appears to have served no purpose; however in the case that legacy applications rely upon it, it is available using the legacy_schema_aliasing
argument to create_engine() as illustrated above.
Changed in version 1.1: the legacy_schema_aliasing flag introduced in version 1.0.5 to allow disabling of legacy mode for schemas now defaults to False.
Clustered Index Support
The MSSQL dialect supports clustered indexes (and primary keys) via the mssql_clustered
option. This option is available to Index, UniqueConstraint, and PrimaryKeyConstraint.
To generate a clustered index:
- Index("my_index", table.c.x, mssql_clustered=True)
which renders the index as CREATE CLUSTERED INDEX my_index ON table (x)
.
To generate a clustered primary key use:
- Table('my_table', metadata,
- Column('x', ...),
- Column('y', ...),
- PrimaryKeyConstraint("x", "y", mssql_clustered=True))
which will render the table, for example, as:
- CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
- PRIMARY KEY CLUSTERED (x, y))
Similarly, we can generate a clustered unique constraint using:
- Table('my_table', metadata,
- Column('x', ...),
- Column('y', ...),
- PrimaryKeyConstraint("x"),
- UniqueConstraint("y", mssql_clustered=True),
- )
To explicitly request a non-clustered primary key (for example, whena separate clustered index is desired), use:
- Table('my_table', metadata,
- Column('x', ...),
- Column('y', ...),
- PrimaryKeyConstraint("x", "y", mssql_clustered=False))
which will render the table, for example, as:
- CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
- PRIMARY KEY NONCLUSTERED (x, y))
Changed in version 1.1: the mssql_clustered option now defaults to None, rather than False. mssql_clustered=False
now explicitly renders the NONCLUSTERED clause, whereas None omits the CLUSTERED clause entirely, allowing SQL Server defaults to take effect.
MSSQL-Specific Index Options
In addition to clustering, the MSSQL dialect supports other special options for Index.
INCLUDE
The mssql_include option renders INCLUDE(colname) for the given string names:
- Index("my_index", table.c.x, mssql_include=['y'])
would render the index as CREATE INDEX my_index ON table (x) INCLUDE (y).
Filtered Indexes
The mssql_where option renders WHERE(condition) for the given criterion expression:
- Index("my_index", table.c.x, mssql_where=table.c.x > 10)
would render the index as CREATE INDEX my_index ON table (x) WHERE x > 10.
New in version 1.3.4.
Index ordering
Index ordering is available via functional expressions, such as:
- Index("my_index", table.c.x.desc())
would render the index as CREATE INDEX my_index ON table (x DESC).
Compatibility Levels
MSSQL supports the notion of setting compatibility levels at the database level. This allows, for instance, a database that is compatible with SQL2000 to run on a SQL2005 database server. server_version_info
will always return the database server version information (in this case SQL2005) and not the compatibility level information. Because of this, if running under a backwards compatibility mode, SQLAlchemy may attempt to use T-SQL statements that are unable to be parsed by the database server.
Triggers
SQLAlchemy by default uses OUTPUT INSERTED to get at newly generated primary key values via IDENTITY columns or other server side defaults. MS-SQL does not allow the usage of OUTPUT INSERTED on tables that have triggers. To disable the usage of OUTPUT INSERTED on a per-table basis, specify implicit_returning=False
for each Table which has triggers:
- Table('mytable', metadata,
- Column('id', Integer, primary_key=True),
- # ...,
- implicit_returning=False
- )
Declarative form:
- class MyClass(Base):
- # ...
- __table_args__ = {'implicit_returning':False}
This option can also be specified engine-wide using the implicit_returning=False
argument on create_engine().
Rowcount Support / ORM Versioning
The SQL Server drivers may have limited ability to return the number of rows updated from an UPDATE or DELETE statement.
As of this writing, the PyODBC driver is not able to return a rowcount when OUTPUT INSERTED is used. This impacts the SQLAlchemy ORM’s versioning feature in many cases where server-side value generators are in use, in that while the versioning operations can succeed, the ORM cannot always check that an UPDATE or DELETE statement matched the number of rows expected, which is how it verifies that the version identifier matched. When this condition occurs, a warning will be emitted but the operation will proceed.
The use of OUTPUT INSERTED can be disabled by setting the Table.implicit_returning
flag to False on a particular Table, which in declarative looks like:
- class MyTable(Base):
- __tablename__ = 'mytable'
- id = Column(Integer, primary_key=True)
- stuff = Column(String(10))
- timestamp = Column(TIMESTAMP(), default=text('DEFAULT'))
- __mapper_args__ = {
- 'version_id_col': timestamp,
- 'version_id_generator': False,
- }
- __table_args__ = {
- 'implicit_returning': False
- }
Enabling Snapshot Isolation
SQL Server has a default transaction isolation mode that locks entire tables, and causes even mildly concurrent applications to have long held locks and frequent deadlocks. Enabling snapshot isolation for the database as a whole is recommended for modern levels of concurrency support. This is accomplished via the following ALTER DATABASE commands executed at the SQL prompt:
- ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
- ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
Background on SQL Server snapshot isolation is available at http://msdn.microsoft.com/en-us/library/ms175095.aspx.
SQL Server SQL Constructs
TryCast is a subclass of SQLAlchemy’s Cast
construct, and works in the same way, except that the SQL expression rendered is “TRY_CAST” rather than “CAST”:
- from sqlalchemy import select
- from sqlalchemy import Numeric
- from sqlalchemy.dialects.mssql import try_cast
- stmt = select([
- try_cast(product_table.c.unit_price, Numeric(10, 4))
- ])
The above would render:
- SELECT TRY_CAST (product_table.unit_price AS NUMERIC(10, 4))
- FROM product_table
New in version 1.3.7.
SQL Server Data Types
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with SQL Server are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:
- from sqlalchemy.dialects.mssql import \
- BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \
- DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, MONEY, \
- NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \
- SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \
- TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR
Types which are specific to SQL Server, or have SQL Server-specific construction arguments, are as follows:
- class sqlalchemy.dialects.mssql.BIT
Bases: sqlalchemy.types.TypeEngine
- class sqlalchemy.dialects.mssql.CHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
Bases: sqlalchemy.types.String
The SQL CHAR type.
__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
inherited from the __init__() method of String
Create a string-holding type.
- Parameters
- length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE
will be issued. Certain databases may require a length
for use in DDL, and will raise an exception when the CREATE TABLE
DDL is issued if a VARCHAR
with no length is included. Whether the value is interpreted as bytes or characters is database specific.
- collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:
- >>> from sqlalchemy import cast, select, String
- >>> print(select([cast('some string', String(collation='utf8'))]))
- SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1
- convert_unicode – When set to True, the String
type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding
parameter passed to create_engine() as the encoding.
Deprecated since version 1.3: The String.convert_unicode
parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.
For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force"
can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.
Note
SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode
flag is inherently a legacy feature.
Note
In the vast majority of cases, the Unicode or UnicodeText
datatypes should be used for a Column
that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.
See also
create_engine.convert_unicode - Engine-wide parameter
- unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors
keyword argument to the standard library’s string.decode()
functions, requires that String.convert_unicode is set to "force".
Deprecated since version 1.3: The String.unicode_errors
parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.
- class sqlalchemy.dialects.mssql.DATETIME2(precision=None, **kw)
Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime
- class sqlalchemy.dialects.mssql.DATETIMEOFFSET(precision=None, **kwargs)
Bases: sqlalchemy.types.TypeEngine
- class sqlalchemy.dialects.mssql.IMAGE(length=None)
Bases: sqlalchemy.types.LargeBinary
__init__(length=None) inherited from the __init__() method of LargeBinary
Construct a LargeBinary type.
- Parameters
- length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.
- class sqlalchemy.dialects.mssql.MONEY
Bases: sqlalchemy.types.TypeEngine
- class sqlalchemy.dialects.mssql.NCHAR(length=None, **kwargs)
Bases: sqlalchemy.types.Unicode
The SQL NCHAR type.
__init__(length=None, **kwargs) inherited from the __init__() method of Unicode
Create a Unicode object. Parameters are the same as that of String, with the exception that convert_unicode defaults to True.
- class sqlalchemy.dialects.mssql.NTEXT(length=None, **kwargs)
Bases: sqlalchemy.types.UnicodeText
MSSQL NTEXT type, for variable-length unicode text up to 2^30 characters.
__init__(length=None, **kwargs) inherited from the __init__() method of UnicodeText
Create a Unicode-converting Text type. Parameters are the same as that of Text, with the exception that convert_unicode defaults to True.
- class sqlalchemy.dialects.mssql.NVARCHAR(length=None, **kwargs)
Bases: sqlalchemy.types.Unicode
The SQL NVARCHAR type.
__init__(length=None, **kwargs) inherited from the __init__() method of Unicode
Create a Unicode object. Parameters are the same as that of String, with the exception that convert_unicode defaults to True.
- class sqlalchemy.dialects.mssql.REAL(**kw)
Bases: sqlalchemy.types.REAL
__init__(**kw)
Construct a Float.
- Parameters
- precision – the numeric precision for use in DDL CREATE TABLE.
- asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
- decimal_return_scale – default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.
New in version 0.9.0.
- class sqlalchemy.dialects.mssql.ROWVERSION(convert_int=False)
Bases: sqlalchemy.dialects.mssql.base.TIMESTAMP
Implement the SQL Server ROWVERSION type.
The ROWVERSION datatype is a SQL Server synonym for the TIMESTAMP datatype, however current SQL Server documentation suggests using ROWVERSION for new datatypes going forward.
The ROWVERSION datatype does not reflect (e.g. introspect) from the database as itself; the returned datatype will be mssql.TIMESTAMP.
This is a read-only datatype that does not support INSERT of values.
New in version 1.2.
__init__(convert_int=False) inherited from the __init__() method of TIMESTAMP
Construct a TIMESTAMP or ROWVERSION type.
- Parameters
- convert_int – if True, binary integer values will be converted to integers on read.
New in version 1.2.
- class sqlalchemy.dialects.mssql.SMALLDATETIME(timezone=False)
Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime
__init__(timezone=False) inherited from the __init__() method of DateTime
Construct a new DateTime.
- Parameters
- timezone – boolean. Indicates that the datetime type should enable timezone support, if available on the base date/time-holding type only. It is recommended to make use of the TIMESTAMP
datatype directly when using this flag, as some databases include separate generic date/time-holding types distinct from the timezone-capable TIMESTAMP datatype, such as Oracle.
- class sqlalchemy.dialects.mssql.SMALLMONEY
Bases: sqlalchemy.types.TypeEngine
- class sqlalchemy.dialects.mssql.SQL_VARIANT
Bases: sqlalchemy.types.TypeEngine
- class
sqlalchemy.dialects.mssql.
TEXT
(length=None, collation=None, convert_unicode=False, unicode_error=None, warnon_bytestring=False, expectunicode=False) - Bases:
sqlalchemy.types.Text
The SQL TEXT type.
init
(length=None, collation=None, convert_unicode=False, unicode_error=None, warnon_bytestring=False, expectunicode=False)
inherited from theinit()
method ofString
Create a string-holding type.
- Parameters
-
-
length – optional, a length for the column for use inDDL and CAST expressions. May be safely omitted if no CREATETABLE
will be issued. Certain databases may require alength
for use in DDL, and will raise an exception whenthe CREATE TABLE
DDL is issued if a VARCHAR
with no length is included. Whether the value isinterpreted as bytes or characters is database specific.
-
Optional, a column-level collation foruse in DDL and CAST expressions. Renders using theCOLLATE keyword supported by SQLite, MySQL, and PostgreSQL.E.g.:
- >>> from sqlalchemy import cast, select, String
- >>> print(select([cast('some string', String(collation='utf8'))]))
- SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1
-
convert_unicode – When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python Unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.
Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.
For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here, which will cause SQLAlchemy's encode/decode services to be used unconditionally.
Note
SQLAlchemy's unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.
Note
In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ASCII data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.
See also
create_engine.convert_unicode – Engine-wide parameter
-
unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions; requires that String.convert_unicode is set to "force".
Deprecated since version 1.3: The String.unicode_errors parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.
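The note above can be made concrete with a minimal sketch (table and column names are hypothetical): the generic Unicode type compiles to NVARCHAR under the MSSQL dialect, which can be verified by DDL compilation alone, with no database connection or DBAPI driver required.

```python
from sqlalchemy import Table, MetaData, Column, Integer, Unicode
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

# Hypothetical table holding non-ASCII text in its "body" column.
m = MetaData()
notes = Table(
    "notes", m,
    Column("id", Integer, primary_key=True),
    Column("body", Unicode(100)),  # renders as NVARCHAR(100) on SQL Server
)

# Compile the CREATE TABLE statement against the MSSQL dialect only.
ddl = str(CreateTable(notes).compile(dialect=mssql.dialect()))
print(ddl)
```

This pattern of compiling DDL against a specific dialect is also a convenient way to check how any of the types on this page will render.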
- class sqlalchemy.dialects.mssql.TIME(precision=None, **kwargs)
Bases: sqlalchemy.types.TIME
- class sqlalchemy.dialects.mssql.TIMESTAMP(convert_int=False)
Bases: sqlalchemy.types._Binary
Implement the SQL Server TIMESTAMP type.
Note this is completely different from the SQL Standard TIMESTAMP type, which is not supported by SQL Server. It is a read-only datatype that does not support INSERT of values.
New in version 1.2.
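As a minimal sketch (the table and column names are hypothetical), the type can be attached to a Column and inspected via DDL compilation; no live connection is required, and the column must never be supplied a value on INSERT since the server generates it.

```python
from sqlalchemy import Table, MetaData, Column, Integer
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

# Hypothetical table with a server-generated, read-only TIMESTAMP column.
m = MetaData()
t = Table(
    "t", m,
    Column("id", Integer, primary_key=True),
    Column("rv", mssql.TIMESTAMP),  # never include this column in INSERT values
)

ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
print(ddl)
```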
- class sqlalchemy.dialects.mssql.TINYINT
Bases: sqlalchemy.types.Integer
__init__()
inherited from the __init__() method of object
Initialize self. See help(type(self)) for accurate signature.
- class sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER
Bases: sqlalchemy.types.TypeEngine
__init__()
inherited from the __init__() method of object
Initialize self. See help(type(self)) for accurate signature.
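A hedged sketch combining the two types above (table and column names are hypothetical), again using dialect-specific DDL compilation to show the rendered SQL Server type names:

```python
from sqlalchemy import Table, MetaData, Column
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

# Hypothetical table: a GUID primary key plus a small integer flags column.
m = MetaData()
widgets = Table(
    "widgets", m,
    Column("guid", mssql.UNIQUEIDENTIFIER, primary_key=True),
    Column("flags", mssql.TINYINT),
)

ddl = str(CreateTable(widgets).compile(dialect=mssql.dialect()))
print(ddl)
```

Note that a non-integer primary key such as UNIQUEIDENTIFIER does not receive the IDENTITY keyword described earlier in this document.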
- class sqlalchemy.dialects.mssql.VARCHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
Bases: sqlalchemy.types.String
The SQL VARCHAR type.
__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
inherited from the __init__() method of String
Create a string-holding type.
- Parameters
-
length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
-
collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:
- >>> from sqlalchemy import cast, select, String
- >>> print(select([cast('some string', String(collation='utf8'))]))
- SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1
-
convert_unicode – When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python Unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.
Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.
For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here, which will cause SQLAlchemy's encode/decode services to be used unconditionally.
Note
SQLAlchemy's unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.
Note
In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ASCII data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.
See also
create_engine.convert_unicode – Engine-wide parameter
-
unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions; requires that String.convert_unicode is set to "force".
Deprecated since version 1.3: The String.unicode_errors parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.
- class sqlalchemy.dialects.mssql.XML(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
Bases: sqlalchemy.types.Text
MSSQL XML type.
This is a placeholder type for reflection purposes that does not include any Python-side datatype support. It also does not currently support additional arguments, such as "CONTENT", "DOCUMENT", "xml_schema_collection".
New in version 1.1.11.
__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
inherited from the __init__() method of String
Create a string-holding type.
- Parameters
-
length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
-
collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:
- >>> from sqlalchemy import cast, select, String
- >>> print(select([cast('some string', String(collation='utf8'))]))
- SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1
-
convert_unicode – When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python Unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.
Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.
For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here, which will cause SQLAlchemy's encode/decode services to be used unconditionally.
Note
SQLAlchemy's unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.
Note
In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ASCII data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.
See also
create_engine.convert_unicode – Engine-wide parameter
-
unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library's string.decode() functions; requires that String.convert_unicode is set to "force".
Deprecated since version 1.3: The String.unicode_errors parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.
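Since the type is reflection-oriented and carries no Python-side behavior, a minimal sketch (table and column names are hypothetical) is limited to showing how it renders in DDL:

```python
from sqlalchemy import Table, MetaData, Column, Integer
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

# Hypothetical table with an XML payload column.
m = MetaData()
docs = Table(
    "docs", m,
    Column("id", Integer, primary_key=True),
    Column("payload", mssql.XML),
)

ddl = str(CreateTable(docs).compile(dialect=mssql.dialect()))
print(ddl)
```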
PyODBC
Support for the Microsoft SQL Server database via the PyODBC driver.
DBAPI
Documentation and download information (if applicable) for PyODBC is available at: http://pypi.python.org/pypi/pyodbc/
Connecting
Connect String:
- mssql+pyodbc://<username>:<password>@<dsnname>
Connecting to PyODBC
The URL here is to be translated to PyODBC connection strings, as detailed in ConnectionStrings.
DSN Connections
A DSN-based connection is preferred overall when using ODBC. A basic DSN-based connection looks like:
- engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")
The above will pass the following connection string to PyODBC:
- dsn=mydsn;UID=user;PWD=pass
If the username and password are omitted, the DSN form will also add the Trusted_Connection=yes directive to the ODBC string.
Hostname Connections
Hostname-based connections are not preferred, however they are supported. The ODBC driver name must be explicitly specified:
- engine = create_engine("mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=SQL+Server+Native+Client+10.0")
Changed in version 1.0.0: Hostname-based PyODBC connections now require the SQL Server driver name to be specified explicitly. SQLAlchemy cannot choose an optimal default here as it varies based on platform and installed drivers.
Other keywords interpreted by the PyODBC dialect to be passed to pyodbc.connect() in both the DSN and hostname cases include: odbc_autotranslate, ansi, unicode_results, autocommit.
Pass through exact PyODBC string
A PyODBC connection string can also be sent exactly as specified in ConnectionStrings into the driver using the parameter odbc_connect. The delimiters must be URL escaped, however, as illustrated below using urllib.parse.quote_plus:
- from sqlalchemy import create_engine
- import urllib.parse
- params = urllib.parse.quote_plus("DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password")
- engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
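The escaping step itself uses only the standard library, so it can be illustrated in isolation without any driver installed (the DRIVER/SERVER/UID/PWD values here are placeholders):

```python
from urllib.parse import quote_plus

# Placeholder ODBC connection values; only the escaping behavior is shown.
raw = "DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password"
params = quote_plus(raw)

# Semicolons, braces and equals signs are percent-encoded; spaces become '+'.
url = "mssql+pyodbc:///?odbc_connect=%s" % params
print(url)
```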
Driver / Unicode Support
PyODBC works best with Microsoft ODBC drivers, particularly in the area of Unicode support on both Python 2 and Python 3.
Using the FreeTDS ODBC drivers on Linux or OSX with PyODBC is not recommended; there have historically been many Unicode-related issues in this area, including before Microsoft offered ODBC drivers for Linux and OSX. Now that Microsoft offers drivers for all platforms, these are recommended for PyODBC use. FreeTDS remains relevant for non-ODBC drivers such as pymssql, where it works very well.
Rowcount Support
PyODBC only has partial support for rowcount. See Rowcount Support / ORM Versioning for important notes when using ORM versioning.
Fast Executemany Mode
The PyODBC driver has added support for a "fast executemany" mode of execution which greatly reduces round trips for a DBAPI executemany() call when using Microsoft ODBC drivers. The feature is enabled by setting the attribute .fast_executemany on the DBAPI cursor when an executemany call is to be used. The SQLAlchemy PyODBC SQL Server dialect supports setting this flag automatically when the fast_executemany flag is passed to create_engine(); note that the ODBC driver must be the Microsoft driver in order to use this flag:
- engine = create_engine(
- "mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server",
- fast_executemany=True)
New in version 1.3.
See also
fast executemany – on GitHub
mxODBC
Support for the Microsoft SQL Server database via the mxODBC driver.
DBAPI
Documentation and download information (if applicable) for mxODBC is available at: http://www.egenix.com/
Connecting
Connect String:
- mssql+mxodbc://<username>:<password>@<dsnname>
Execution Modes
mxODBC features two styles of statement execution, using the cursor.execute() and cursor.executedirect() methods (the second being an extension to the DBAPI specification). The former makes use of a particular API call specific to the SQL Server Native Client ODBC driver known as SQLDescribeParam, while the latter does not.
mxODBC apparently only makes repeated use of a single prepared statement when SQLDescribeParam is used. The advantage to prepared statement reuse is one of performance. The disadvantage is that SQLDescribeParam has a limited set of scenarios in which bind parameters are understood, including that they cannot be placed within the argument lists of function calls, anywhere outside the FROM, or even within subqueries within the FROM clause - making the usage of bind parameters within SELECT statements impossible for all but the most simplistic statements.
For this reason, the mxODBC dialect uses the "native" mode by default only for INSERT, UPDATE, and DELETE statements, and uses the escaped string mode for all other statements.
This behavior can be controlled via execution_options() using the native_odbc_execute flag with a value of True or False, where a value of True will unconditionally use native bind parameters and a value of False will unconditionally use string-escaped parameters.
pymssql
Support for the Microsoft SQL Server database via the pymssql driver.
DBAPI
Documentation and download information (if applicable) for pymssql is available at: http://pymssql.org/
Connecting
Connect String:
- mssql+pymssql://<username>:<password>@<freetds_name>/?charset=utf8
pymssql is a Python module that provides a Python DBAPI interface around FreeTDS. Compatible builds are available for Linux, MacOSX and Windows platforms.
Modern versions of this driver work very well with SQL Server and FreeTDS from Linux and are highly recommended.
zxjdbc
Support for the Microsoft SQL Server database via the zxJDBC for Jython driver.
Note
Jython is not supported by current versions of SQLAlchemy. The zxjdbc dialect should be considered experimental.
DBAPI
Drivers for this database are available at: http://jtds.sourceforge.net/
Connecting
Connect String:
- mssql+zxjdbc://user:pass@host:port/dbname[?key=value&key=value...]
AdoDBAPI
Support for the Microsoft SQL Server database via the adodbapi driver.
DBAPI
Documentation and download information (if applicable) for adodbapi is available at: http://adodbapi.sourceforge.net/
Connecting
Connect String:
- mssql+adodbapi://<username>:<password>@<dsnname>
Note
The adodbapi dialect is not implemented in SQLAlchemy versions 0.6 and above at this time.