- Oracle
- DBAPI Support
- Auto Increment Behavior
- Transaction Isolation Level / Autocommit
- Identifier Casing
- Max Identifier Lengths
- LIMIT/OFFSET Support
- RETURNING Support
- ON UPDATE CASCADE
- Oracle 8 Compatibility
- Synonym/DBLINK Reflection
- Constraint Reflection
- Table names with SYSTEM/SYSAUX tablespaces
- DateTime Compatibility
- Oracle Table Options
- Oracle Specific Index Options
- Oracle Data Types
- cx_Oracle
- DBAPI
- Connecting
- DSN vs. Hostname connections
- Passing cx_Oracle connect arguments
- Options consumed by the SQLAlchemy cx_Oracle dialect outside of the driver
- Using cx_Oracle SessionPool
- Unicode
- Fine grained control over cx_Oracle data binding performance with setinputsizes
- RETURNING Support
- LOB Objects
- Two Phase Transactions Not Supported
- Precision Numerics
Oracle
Support for the Oracle database.
The following table summarizes current support levels for database release versions.
Support type | Versions
---|---
Fully tested in CI | 11.2, 18c
Normal support | 11+
Best effort | 8+
DBAPI Support
The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.
- cx-Oracle
Auto Increment Behavior
SQLAlchemy Table objects which include integer primary keys are usually assumed to have “autoincrementing” behavior, meaning they can generate their own primary key values upon INSERT. For use within Oracle, two options are available, which are the use of IDENTITY columns (Oracle 12 and above only) or the association of a SEQUENCE with the column.
Specifying GENERATED AS IDENTITY (Oracle 12 and above)
Starting from version 12, Oracle can make use of IDENTITY columns using the Identity construct to specify the autoincrementing behavior:
t = Table('mytable', metadata,
Column('id', Integer, Identity(start=3), primary_key=True),
Column(...), ...
)
The CREATE TABLE for the above Table object would be:
CREATE TABLE mytable (
id INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 3),
...,
PRIMARY KEY (id)
)
The Identity object supports many options to control the “autoincrementing” behavior of the column, such as the starting value, the incrementing value, etc. In addition to the standard options, Oracle supports setting Identity.always to None to use the default generated mode, rendering GENERATED AS IDENTITY in the DDL. It also supports setting Identity.on_null to True to specify ON NULL in conjunction with a ‘BY DEFAULT’ identity column.
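As a minimal sketch of these Oracle-specific options (table and column names are hypothetical):
from sqlalchemy import Column, Identity, Integer, MetaData, Table

metadata = MetaData()

# on_null=True renders GENERATED BY DEFAULT ON NULL AS IDENTITY
table_one = Table(
    "table_one", metadata,
    Column("id", Integer, Identity(on_null=True), primary_key=True),
)

# always=None renders the default generated mode, GENERATED AS IDENTITY
table_two = Table(
    "table_two", metadata,
    Column("id", Integer, Identity(always=None), primary_key=True),
)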
Using a SEQUENCE (all Oracle versions)
Older versions of Oracle had no “autoincrement” feature; SQLAlchemy relies upon sequences to produce these values. With older Oracle versions, a sequence must always be explicitly specified to enable autoincrement. This is divergent from the majority of documentation examples, which assume the usage of an autoincrement-capable database. To specify sequences, use the sqlalchemy.schema.Sequence object, which is passed to a Column construct:
t = Table('mytable', metadata,
Column('id', Integer, Sequence('id_seq'), primary_key=True),
Column(...), ...
)
This step is also required when using table reflection, i.e. autoload_with=engine:
t = Table('mytable', metadata,
Column('id', Integer, Sequence('id_seq'), primary_key=True),
autoload_with=engine
)
Changed in version 1.4: Added the Identity construct in a Column to specify the option of an autoincrementing column.
Transaction Isolation Level / Autocommit
The Oracle database supports “READ COMMITTED” and “SERIALIZABLE” modes of isolation. The AUTOCOMMIT isolation level is also supported by the cx_Oracle dialect.
To set using per-connection execution options:
connection = engine.connect()
connection = connection.execution_options(
isolation_level="AUTOCOMMIT"
)
For READ COMMITTED and SERIALIZABLE, the Oracle dialect sets the level at the session level using ALTER SESSION, which is reverted back to its default setting when the connection is returned to the connection pool.
Valid values for isolation_level include:
READ COMMITTED
AUTOCOMMIT
SERIALIZABLE
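For example, a minimal sketch (DSN hypothetical) of setting SERIALIZABLE for a single checked-out connection; the dialect issues ALTER SESSION and reverts the level when the connection is returned to the pool:
from sqlalchemy import create_engine

engine = create_engine("oracle+cx_oracle://scott:tiger@dsn")

# the isolation level applies only to this checked-out connection
with engine.connect().execution_options(
        isolation_level="SERIALIZABLE") as connection:
    connection.execute("select 1 from dual")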
Note
The implementation for the Connection.get_isolation_level() method as implemented by the Oracle dialect necessarily forces the start of a transaction using the Oracle LOCAL_TRANSACTION_ID function; otherwise no level is normally readable.
Additionally, the Connection.get_isolation_level() method will raise an exception if the v$transaction view is not available due to permissions or other reasons, which is a common occurrence in Oracle installations.
The cx_Oracle dialect attempts to call the Connection.get_isolation_level() method when the dialect makes its first connection to the database in order to acquire the “default” isolation level. This default level is necessary so that the level can be reset on a connection after it has been temporarily modified using the Connection.execution_options() method. In the common event that the Connection.get_isolation_level() method raises an exception, whether due to v$transaction not being readable or any other database-related failure, the level is assumed to be “READ COMMITTED”. No warning is emitted for this initial first-connect condition as it is expected to be a common restriction on Oracle databases.
New in version 1.3.16: added support for AUTOCOMMIT to the cx_oracle dialect as well as the notion of a default isolation level
New in version 1.3.21: Added support for SERIALIZABLE as well as live reading of the isolation level.
Changed in version 1.3.22: In the event that the default isolation level cannot be read due to permissions on the v$transaction view as is common in Oracle installations, the default isolation level is hardcoded to “READ COMMITTED” which was the behavior prior to 1.3.21.
See also
Setting Transaction Isolation Levels including DBAPI Autocommit
Identifier Casing
In Oracle, the data dictionary represents all case insensitive identifier names using UPPERCASE text. SQLAlchemy on the other hand considers an all-lower case identifier name to be case insensitive. The Oracle dialect converts all case insensitive identifiers to and from those two formats during schema level communication, such as reflection of tables and indexes. Using an UPPERCASE name on the SQLAlchemy side indicates a case sensitive identifier, and SQLAlchemy will quote the name - this will cause mismatches against data dictionary data received from Oracle, so unless identifier names have been truly created as case sensitive (i.e. using quoted names), all lowercase names should be used on the SQLAlchemy side.
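As a hypothetical illustration of this convention, assuming a table that was created on the Oracle side as MY_TABLE without quoting:
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("oracle+cx_oracle://scott:tiger@dsn")  # hypothetical DSN
metadata = MetaData()

# all-lowercase name: treated as case insensitive and matched against the
# uppercase MY_TABLE entry in the Oracle data dictionary
my_table = Table("my_table", metadata, autoload_with=engine)

# an all-uppercase name is treated as case sensitive and will be quoted,
# matching only a table that was actually created with a quoted "MY_TABLE" name
# my_table_quoted = Table("MY_TABLE", metadata, autoload_with=engine)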
Max Identifier Lengths
Oracle has changed the default max identifier length as of Oracle Server version 12.2. Prior to this version, the length was 30, and for 12.2 and greater it is now 128. This change impacts SQLAlchemy in the area of generated SQL label names as well as the generation of constraint names, particularly in the case where the constraint naming convention feature described at Configuring Constraint Naming Conventions is being used.
To assist with this change and others, Oracle includes the concept of a “compatibility” version, which is a version number that is independent of the actual server version in order to assist with migration of Oracle databases, and may be configured within the Oracle server itself. This compatibility version is retrieved using the query SELECT value FROM v$parameter WHERE name = 'compatible';. The SQLAlchemy Oracle dialect, when tasked with determining the default max identifier length, will attempt to use this query upon first connect in order to determine the effective compatibility version of the server, which determines what the maximum allowed identifier length is for the server. If the view is not available, the server version information is used instead.
As of SQLAlchemy 1.4, the default max identifier length for the Oracle dialect is 128 characters. Upon first connect, the compatibility version is detected and if it is less than Oracle version 12.2, the max identifier length is changed to be 30 characters. In all cases, setting the create_engine.max_identifier_length parameter will bypass this change and the value given will be used as is:
engine = create_engine(
"oracle+cx_oracle://scott:tiger@oracle122",
max_identifier_length=30)
The maximum identifier length comes into play both when generating anonymized SQL labels in SELECT statements, and more crucially when generating constraint names from a naming convention. It is this area that has created the need for SQLAlchemy to change this default conservatively. For example, the following naming convention produces two very different constraint names based on the identifier length:
from sqlalchemy import Column
from sqlalchemy import Index
from sqlalchemy import Integer
from sqlalchemy import MetaData
from sqlalchemy import Table
from sqlalchemy.dialects import oracle
from sqlalchemy.schema import CreateIndex
m = MetaData(naming_convention={"ix": "ix_%(column_0N_name)s"})
t = Table(
"t",
m,
Column("some_column_name_1", Integer),
Column("some_column_name_2", Integer),
Column("some_column_name_3", Integer),
)
ix = Index(
None,
t.c.some_column_name_1,
t.c.some_column_name_2,
t.c.some_column_name_3,
)
oracle_dialect = oracle.dialect(max_identifier_length=30)
print(CreateIndex(ix).compile(dialect=oracle_dialect))
With an identifier length of 30, the above CREATE INDEX looks like:
CREATE INDEX ix_some_column_name_1s_70cd ON t
(some_column_name_1, some_column_name_2, some_column_name_3)
However with length=128, it becomes:
CREATE INDEX ix_some_column_name_1some_column_name_2some_column_name_3 ON t
(some_column_name_1, some_column_name_2, some_column_name_3)
Applications which have run versions of SQLAlchemy prior to 1.4 on an Oracle server version 12.2 or greater are therefore subject to the scenario of a database migration that wishes to “DROP CONSTRAINT” on a name that was previously generated with the shorter length. This migration will fail when the identifier length is changed without the name of the index or constraint first being adjusted. Such applications are strongly advised to make use of create_engine.max_identifier_length in order to maintain control of the generation of truncated names, and to fully review and test all database migrations in a staging environment when changing this value to ensure that the impact of this change has been mitigated.
Changed in version 1.4: the default max_identifier_length for Oracle is 128 characters, which is adjusted down to 30 upon first connect if an older version of Oracle server (compatibility version < 12.2) is detected.
LIMIT/OFFSET Support
Oracle has no direct support for LIMIT and OFFSET until version 12c. To achieve this behavior across all widely used versions of Oracle starting with the 8 series, SQLAlchemy currently makes use of ROWNUM to achieve LIMIT/OFFSET; the exact methodology is taken from https://blogs.oracle.com/oraclemagazine/on-rownum-and-limiting-results .
There is currently a single option to affect its behavior:
- the “FIRST_ROWS()” optimization keyword is not used by default. To enable the usage of this optimization directive, specify optimize_limits=True to create_engine(), as in the example below.
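For example (DSN hypothetical):
from sqlalchemy import create_engine

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@dsn", optimize_limits=True)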
Changed in version 1.4: The Oracle dialect renders limit/offset integer values using a “post compile” scheme which renders the integer directly before passing the statement to the cursor for execution. The use_binds_for_limits flag no longer has an effect.
See also
New “post compile” bound parameters used for LIMIT/OFFSET in Oracle, SQL Server.
Support for changing the row number strategy, which would include one that makes use of the row_number() window function as well as one that makes use of the Oracle 12c “FETCH FIRST N ROW / OFFSET N ROWS” keywords, may be added in a future release.
RETURNING Support
The Oracle database supports a limited form of RETURNING, in order to retrieve result sets of matched rows from INSERT, UPDATE and DELETE statements. Oracle’s RETURNING..INTO syntax only supports one row being returned, as it relies upon OUT parameters in order to function. In addition, supported DBAPIs have further limitations (see RETURNING Support).
SQLAlchemy’s “implicit returning” feature, which employs RETURNING within an INSERT and sometimes an UPDATE statement in order to fetch newly generated primary key values and other SQL defaults and expressions, is normally enabled on the Oracle backend. By default, “implicit returning” typically only fetches the value of a single nextval(some_seq) expression embedded into an INSERT in order to increment a sequence within an INSERT statement and get the value back at the same time. To disable this feature across the board, specify implicit_returning=False to create_engine():
engine = create_engine("oracle://scott:tiger@dsn",
implicit_returning=False)
Implicit returning can also be disabled on a table-by-table basis as a table option:
# Core Table
my_table = Table("my_table", metadata, ..., implicit_returning=False)
# declarative
class MyClass(Base):
__tablename__ = 'my_table'
__table_args__ = {"implicit_returning": False}
See also
RETURNING Support - additional cx_oracle-specific restrictions on implicit returning.
ON UPDATE CASCADE
Oracle doesn’t have native ON UPDATE CASCADE functionality. A trigger based solution is available at http://asktom.oracle.com/tkyte/update_cascade/index.html .
When using the SQLAlchemy ORM, the ORM has limited ability to manually issue cascading updates - specify ForeignKey objects using the deferrable=True, initially="DEFERRED" keyword arguments, and specify passive_updates=False on each relationship(), as in the sketch below.
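A minimal sketch under those settings, using hypothetical parent/child mappings:
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Parent(Base):
    __tablename__ = "parent"
    id = Column(Integer, primary_key=True)
    # passive_updates=False lets the ORM issue the cascading UPDATE itself
    children = relationship("Child", passive_updates=False)

class Child(Base):
    __tablename__ = "child"
    id = Column(Integer, primary_key=True)
    # the deferrable FK allows the parent key to change within the transaction
    parent_id = Column(
        Integer, ForeignKey("parent.id", deferrable=True, initially="DEFERRED"))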
Oracle 8 Compatibility
When Oracle 8 is detected, the dialect internally configures itself to the following behaviors:
- the use_ansi flag is set to False. This has the effect of converting all JOIN phrases into the WHERE clause, and in the case of LEFT OUTER JOIN makes use of Oracle’s (+) operator.
- the NVARCHAR2 and NCLOB datatypes are no longer generated as DDL when the Unicode type is used - VARCHAR2 and CLOB are issued instead. This is because these types don’t seem to work correctly on Oracle 8 even though they are available. The NVARCHAR and NCLOB types will always generate NVARCHAR2 and NCLOB.
- the “native unicode” mode is disabled when using cx_oracle, i.e. SQLAlchemy encodes all Python unicode objects to “string” before passing in as bind parameters.
Synonym/DBLINK Reflection
When using reflection with Table objects, the dialect can optionally search for tables indicated by synonyms, either in local or remote schemas or accessed over DBLINK, by passing the flag oracle_resolve_synonyms=True as a keyword argument to the Table construct:
some_table = Table('some_table', autoload_with=some_engine,
oracle_resolve_synonyms=True)
When this flag is set, the given name (such as some_table above) will be searched not just in the ALL_TABLES view, but also within the ALL_SYNONYMS view to see if this name is actually a synonym to another name. If the synonym is located and refers to a DBLINK, the oracle dialect knows how to locate the table’s information using DBLINK syntax (e.g. @dblink).
oracle_resolve_synonyms is accepted wherever reflection arguments are accepted, including methods such as MetaData.reflect() and Inspector.get_columns().
If synonyms are not in use, this flag should be left disabled.
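As a sketch, the same flag may be passed to other reflection entrypoints (DSN and table name hypothetical):
from sqlalchemy import MetaData, create_engine, inspect

engine = create_engine("oracle+cx_oracle://scott:tiger@dsn")

# reflect an entire schema, resolving synonyms along the way
metadata = MetaData()
metadata.reflect(engine, oracle_resolve_synonyms=True)

# or inspect a single table
inspector = inspect(engine)
columns = inspector.get_columns("some_table", oracle_resolve_synonyms=True)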
Constraint Reflection
The Oracle dialect can return information about foreign key, unique, and CHECK constraints, as well as indexes on tables.
Raw information regarding these constraints can be acquired using Inspector.get_foreign_keys(), Inspector.get_unique_constraints(), Inspector.get_check_constraints(), and Inspector.get_indexes().
Changed in version 1.2: The Oracle dialect can now reflect UNIQUE and CHECK constraints.
When using reflection at the Table level, the Table will also include these constraints.
Note the following caveats:
- When using the Inspector.get_check_constraints() method, Oracle builds a special “IS NOT NULL” constraint for columns that specify “NOT NULL”. This constraint is not returned by default; to include the “IS NOT NULL” constraints, pass the flag include_all=True:
from sqlalchemy import create_engine, inspect
engine = create_engine("oracle+cx_oracle://s:t@dsn")
inspector = inspect(engine)
all_check_constraints = inspector.get_check_constraints(
    "some_table", include_all=True)
- in most cases, when reflecting a Table, a UNIQUE constraint will not be available as a UniqueConstraint object, as Oracle mirrors unique constraints with a UNIQUE index in most cases (the exception seems to be when two or more unique constraints represent the same columns); the Table will instead represent these using Index with the unique=True flag set; see the sketch after this list.
- Oracle creates an implicit index for the primary key of a table; this index is excluded from all index results.
- the list of columns reflected for an index will not include column names that start with SYS_NC.
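For example, a minimal sketch (DSN and table name hypothetical) of inspecting the unique indexes present on a reflected Table:
from sqlalchemy import MetaData, Table, create_engine

engine = create_engine("oracle+cx_oracle://scott:tiger@dsn")
metadata = MetaData()

some_table = Table("some_table", metadata, autoload_with=engine)

# unique constraints typically appear here as Index objects with unique=True
for index in some_table.indexes:
    print(index.name, index.unique, [c.name for c in index.columns])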
Table names with SYSTEM/SYSAUX tablespaces
The Inspector.get_table_names() and Inspector.get_temp_table_names() methods each return a list of table names for the current engine. These methods are also part of the reflection which occurs within an operation such as MetaData.reflect(). By default, these operations exclude the SYSTEM and SYSAUX tablespaces from the operation. In order to change this, the default list of tablespaces excluded can be changed at the engine level using the exclude_tablespaces parameter:
# exclude SYSAUX and SOME_TABLESPACE, but not SYSTEM
e = create_engine(
"oracle://scott:tiger@xe",
exclude_tablespaces=["SYSAUX", "SOME_TABLESPACE"])
New in version 1.1.
DateTime Compatibility
Oracle has no datatype known as DATETIME; it instead has only DATE, which can actually store a date and time value. For this reason, the Oracle dialect provides a type DATE which is a subclass of DateTime. This type has no special behavior, and is only present as a “marker” for this type; additionally, when a database column is reflected and the type is reported as DATE, the time-supporting DATE type is used.
Changed in version 0.9.4: Added DATE to subclass DateTime. This is a change as previous versions would reflect a DATE column as DATE, which subclasses Date. The only significance here is for schemes that are examining the type of column for use in special Python translations or for migrating schemas to other database backends.
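For example, a minimal sketch (table name hypothetical) of a column declared with the Oracle-specific DATE type:
from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects import oracle

metadata = MetaData()

events = Table(
    "events", metadata,
    Column("id", Integer, primary_key=True),
    # behaves like DateTime; the Oracle DATE type stores both a date and a time
    Column("created_at", oracle.DATE),
)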
Oracle Table Options
The CREATE TABLE phrase supports the following options with Oracle in conjunction with the Table construct:
- ON COMMIT:
Table(
    "some_table", metadata, ...,
    prefixes=['GLOBAL TEMPORARY'], oracle_on_commit='PRESERVE ROWS')
New in version 1.0.0.
- COMPRESS:
Table('mytable', metadata, Column('data', String(32)),
    oracle_compress=True)
Table('mytable', metadata, Column('data', String(32)),
    oracle_compress=6)
The oracle_compress parameter accepts either an integer compression level, or True to use the default compression level.
New in version 1.0.0.
Oracle Specific Index Options
Bitmap Indexes
You can specify the oracle_bitmap parameter to create a bitmap index instead of a B-tree index:
Index('my_index', my_table.c.data, oracle_bitmap=True)
Bitmap indexes cannot be unique and cannot be compressed. SQLAlchemy will not check for such limitations, only the database will.
New in version 1.0.0.
Index compression
Oracle has a more efficient storage mode for indexes containing lots of repeated values. Use the oracle_compress parameter to turn on key compression:
Index('my_index', my_table.c.data, oracle_compress=True)
Index('my_index', my_table.c.data1, my_table.c.data2, unique=True,
oracle_compress=1)
The oracle_compress parameter accepts either an integer specifying the number of prefix columns to compress, or True to use the default (all columns for non-unique indexes, all but the last column for unique indexes).
New in version 1.0.0.
Oracle Data Types
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with Oracle are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:
from sqlalchemy.dialects.oracle import \
BFILE, BLOB, CHAR, CLOB, DATE, \
DOUBLE_PRECISION, FLOAT, INTERVAL, LONG, NCLOB, NCHAR, \
NUMBER, NVARCHAR, NVARCHAR2, RAW, TIMESTAMP, VARCHAR, \
VARCHAR2
New in version 1.2.19: Added NCHAR to the list of datatypes exported by the Oracle dialect.
Types which are specific to Oracle, or have Oracle-specific construction arguments, are as follows:
Types which are specific to Oracle, or have Oracle-specific construction arguments, are as follows:
Object Name | Description
---|---
BFILE |
DATE | Provide the oracle DATE type.
DOUBLE_PRECISION |
INTERVAL |
LONG |
NCLOB |
NUMBER |
RAW |
class sqlalchemy.dialects.oracle.BFILE(length=None)
Class signature
class sqlalchemy.dialects.oracle.BFILE (sqlalchemy.types.LargeBinary)
method sqlalchemy.dialects.oracle.BFILE.__init__(length=None)
inherited from the sqlalchemy.types.LargeBinary.__init__ method of LargeBinary
Construct a LargeBinary type.
Parameters
length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.
class sqlalchemy.dialects.oracle.DATE(timezone=False)
Provide the oracle DATE type.
This type has no special Python behavior, except that it subclasses DateTime; this is to suit the fact that the Oracle DATE type supports a time value.
New in version 0.9.4.
Class signature
class sqlalchemy.dialects.oracle.DATE (sqlalchemy.types.DateTime)
method sqlalchemy.dialects.oracle.DATE.__init__(timezone=False)
inherited from the sqlalchemy.types.DateTime.__init__ method of DateTime
Construct a new DateTime.
Parameters
timezone – boolean. Indicates that the datetime type should enable timezone support, if available on the base date/time-holding type only. It is recommended to make use of the TIMESTAMP datatype directly when using this flag, as some databases include separate generic date/time-holding types distinct from the timezone-capable TIMESTAMP datatype, such as Oracle.
class sqlalchemy.dialects.oracle.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None)
Class signature
class sqlalchemy.dialects.oracle.DOUBLE_PRECISION (sqlalchemy.types.Float)
method sqlalchemy.dialects.oracle.DOUBLE_PRECISION.__init__(precision=None, asdecimal=False, decimal_return_scale=None)
inherited from the sqlalchemy.types.Float.__init__ method of Float
Construct a Float.
Parameters
precision – the numeric precision for use in DDL CREATE TABLE.
asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
decimal_return_scale – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.
New in version 0.9.0.
class sqlalchemy.dialects.oracle.INTERVAL(day_precision=None, second_precision=None)
Class signature
class sqlalchemy.dialects.oracle.INTERVAL (sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval)
method sqlalchemy.dialects.oracle.INTERVAL.__init__(day_precision=None, second_precision=None)
Construct an INTERVAL.
Note that only DAY TO SECOND intervals are currently supported. This is due to a lack of support for YEAR TO MONTH intervals within available DBAPIs.
Parameters
day_precision – the day precision value. This is the number of digits to store for the day field. Defaults to “2”.
second_precision – the second precision value. This is the number of digits to store for the fractional seconds field. Defaults to “6”.
class sqlalchemy.dialects.oracle.NCLOB(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
Class signature
class sqlalchemy.dialects.oracle.NCLOB (sqlalchemy.types.Text)
method sqlalchemy.dialects.oracle.NCLOB.__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
inherited from the sqlalchemy.types.String.__init__ method of String
Create a string-holding type.
Parameters
length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:
>>> from sqlalchemy import cast, select, String
>>> print(select(cast('some string', String(collation='utf8'))))
SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1
convert_unicode – When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.
Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.
For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.
Note
SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.
Note
In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.
See also
create_engine.convert_unicode - Engine-wide parameter
unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions, requires that String.convert_unicode is set to "force".
Deprecated since version 1.3: The String.unicode_errors parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.
class sqlalchemy.dialects.oracle.NUMBER(precision=None, scale=None, asdecimal=None)
Class signature
class sqlalchemy.dialects.oracle.NUMBER (sqlalchemy.types.Numeric, sqlalchemy.types.Integer)
method sqlalchemy.dialects.oracle.NUMBER.__init__(precision=None, scale=None, asdecimal=None)
Construct a Numeric.
Parameters
precision – the numeric precision for use in DDL CREATE TABLE.
scale – the numeric scale for use in DDL CREATE TABLE.
asdecimal – default True. Return whether or not values should be sent as Python Decimal objects, or as floats. Different DBAPIs send one or the other based on datatypes - the Numeric type will ensure that return values are one or the other across DBAPIs consistently.
decimal_return_scale – Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Types which do include an explicit “.scale” value, such as the base Numeric as well as the MySQL float types, will use the value of “.scale” as the default for decimal_return_scale, if not otherwise specified.
New in version 0.9.0.
When using the Numeric type, care should be taken to ensure that the asdecimal setting is appropriate for the DBAPI in use - when Numeric applies a conversion from Decimal->float or float->Decimal, this conversion incurs an additional performance overhead for all result columns received.
DBAPIs that return Decimal natively (e.g. psycopg2) will have better accuracy and higher performance with a setting of True, as the native translation to Decimal reduces the amount of floating-point issues at play, and the Numeric type itself doesn’t need to apply any further conversions. However, another DBAPI which returns floats natively will incur an additional conversion overhead, and is still subject to floating point data loss - in which case asdecimal=False will at least remove the extra conversion overhead.
class sqlalchemy.dialects.oracle.LONG(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
Class signature
class sqlalchemy.dialects.oracle.LONG (sqlalchemy.types.Text)
method sqlalchemy.dialects.oracle.LONG.__init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False, _expect_unicode=False)
inherited from the sqlalchemy.types.String.__init__ method of String
Create a string-holding type.
Parameters
length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:
>>> from sqlalchemy import cast, select, String
>>> print(select(cast('some string', String(collation='utf8'))))
SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1
convert_unicode – When set to True, the String type will assume that input is to be passed as Python Unicode objects under Python 2, and results returned as Python Unicode objects. In the rare circumstance that the DBAPI does not support Python unicode under Python 2, SQLAlchemy will use its own encoder/decoder functionality on strings, referring to the value of the create_engine.encoding parameter passed to create_engine() as the encoding.
Deprecated since version 1.3: The String.convert_unicode parameter is deprecated and will be removed in a future release. All modern DBAPIs now support Python Unicode directly and this parameter is unnecessary.
For the extremely rare case that Python Unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python Unicode, the string value "force" can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.
Note
SQLAlchemy’s unicode-conversion flags and features only apply to Python 2; in Python 3, all string objects are Unicode objects. For this reason, as well as the fact that virtually all modern DBAPIs now support Unicode natively even under Python 2, the String.convert_unicode flag is inherently a legacy feature.
Note
In the vast majority of cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database side as well as set up the correct Unicode behaviors under Python 2.
See also
create_engine.convert_unicode - Engine-wide parameter
unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions, requires that String.convert_unicode is set to "force".
Deprecated since version 1.3: The String.unicode_errors parameter is deprecated and will be removed in a future release. This parameter is unnecessary for modern Python DBAPIs and degrades performance significantly.
class sqlalchemy.dialects.oracle.RAW(length=None)
Class signature
class sqlalchemy.dialects.oracle.RAW (sqlalchemy.types._Binary)
method sqlalchemy.dialects.oracle.RAW.__init__(length=None)
inherited from the sqlalchemy.types._Binary.__init__ method of sqlalchemy.types._Binary
Initialize self. See help(type(self)) for accurate signature.
cx_Oracle
Support for the Oracle database via the cx-Oracle driver.
DBAPI
Documentation and download information (if applicable) for cx-Oracle is available at: https://oracle.github.io/python-cx_Oracle/
Connecting
Connect String:
oracle+cx_oracle://user:pass@host:port/dbname[?key=value&key=value...]
DSN vs. Hostname connections
The dialect will connect to a DSN if no database name portion is presented, such as:
engine = create_engine("oracle+cx_oracle://scott:tiger@oracle1120/?encoding=UTF-8&nencoding=UTF-8")
Above, oracle1120 is passed to cx_Oracle as an Oracle datasource name.
Alternatively, if a database name is present, the cx_Oracle.makedsn() function is used to create an ad-hoc “datasource” name assuming host and port:
engine = create_engine("oracle+cx_oracle://scott:tiger@hostname:1521/dbname?encoding=UTF-8&nencoding=UTF-8")
Above, the DSN would be created as follows:
>>> import cx_Oracle
>>> cx_Oracle.makedsn("hostname", 1521, sid="dbname")
'(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=1521))(CONNECT_DATA=(SID=dbname)))'
The service_name parameter, also consumed by cx_Oracle.makedsn(), may be specified in the URL query string, e.g. ?service_name=my_service, as in the sketch below.
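For example (credentials, host and service name hypothetical):
from sqlalchemy import create_engine

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@hostname:1521/"
    "?service_name=my_service&encoding=UTF-8&nencoding=UTF-8")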
Passing cx_Oracle connect arguments
Additional connection arguments can usually be passed via the URL query string; particular symbols like cx_Oracle.SYSDBA are intercepted and converted to the correct symbol:
e = create_engine(
"oracle+cx_oracle://user:pass@dsn?encoding=UTF-8&nencoding=UTF-8&mode=SYSDBA&events=true")
Changed in version 1.3: the cx_oracle dialect now accepts all argument names within the URL string itself, to be passed to the cx_Oracle DBAPI. As was the case earlier but not correctly documented, the create_engine.connect_args parameter also accepts all cx_Oracle DBAPI connect arguments.
To pass arguments directly to .connect() without using the query string, use the create_engine.connect_args dictionary. Any cx_Oracle parameter value and/or constant may be passed, such as:
import cx_Oracle
e = create_engine(
"oracle+cx_oracle://user:pass@dsn",
connect_args={
"encoding": "UTF-8",
"nencoding": "UTF-8",
"mode": cx_Oracle.SYSDBA,
"events": True
}
)
Options consumed by the SQLAlchemy cx_Oracle dialect outside of the driver
There are also options that are consumed by the SQLAlchemy cx_oracle dialect itself. These options are always passed directly to create_engine(), such as:
e = create_engine(
"oracle+cx_oracle://user:pass@dsn", coerce_to_unicode=False)
The parameters accepted by the cx_oracle dialect are as follows:
- arraysize - set the cx_oracle.arraysize value on cursors, defaulted to 50. This setting is significant with cx_Oracle as the contents of LOB objects are only readable within a “live” row (e.g. within a batch of 50 rows).
- auto_convert_lobs - defaults to True; see LOB Objects.
- coerce_to_unicode - see Unicode for detail.
- coerce_to_decimal - see Precision Numerics for detail.
- encoding_errors - see Encoding Errors for detail.
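For example, a sketch combining several of these dialect-level options (the values shown are illustrative only):
from sqlalchemy import create_engine

e = create_engine(
    "oracle+cx_oracle://user:pass@dsn",
    arraysize=500,
    auto_convert_lobs=True,
    coerce_to_decimal=False,
)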
Using cx_Oracle SessionPool
The cx_Oracle library provides its own connectivity services that may be used in place of SQLAlchemy’s pooling functionality. This can be achieved by using the create_engine.creator parameter to provide a function that returns a new connection, along with setting create_engine.poolclass to NullPool to disable SQLAlchemy’s pooling:
import cx_Oracle
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool
pool = cx_Oracle.SessionPool(
user="scott", password="tiger", dsn="oracle1120",
min=2, max=5, increment=1, threaded=True
)
engine = create_engine("oracle://", creator=pool.acquire, poolclass=NullPool)
The above engine may then be used normally where cx_Oracle’s pool handles connection pooling:
with engine.connect() as conn:
print(conn.scalar("select 1 FROM dual"))
Unicode
As is the case for all DBAPIs under Python 3, all strings are inherently Unicode strings. Under Python 2, cx_Oracle also supports Python Unicode objects directly. In all cases however, the driver requires an explicit encoding configuration.
Ensuring the Correct Client Encoding
The long accepted standard for establishing client encoding for nearly all Oracle related software is via the NLS_LANG environment variable. cx_Oracle, like most other Oracle drivers, will use this environment variable as the source of its encoding configuration. The format of this variable is idiosyncratic; a typical value would be AMERICAN_AMERICA.AL32UTF8.
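A minimal sketch of relying on NLS_LANG; this assumes the variable is placed in the process environment before the first connection is made, so that the Oracle client picks it up:
import os

from sqlalchemy import create_engine

# set before the first connection so the Oracle client reads it
os.environ["NLS_LANG"] = "AMERICAN_AMERICA.AL32UTF8"

engine = create_engine("oracle+cx_oracle://scott:tiger@dsn")  # hypothetical DSN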
The cx_Oracle driver also supports a programmatic alternative, which is to pass the encoding and nencoding parameters directly to its .connect() function. These can be present in the URL as follows:
engine = create_engine("oracle+cx_oracle://scott:tiger@oracle1120/?encoding=UTF-8&nencoding=UTF-8")
For the meaning of the encoding and nencoding parameters, please consult Characters Sets and National Language Support (NLS).
See also
Characters Sets and National Language Support (NLS) - in the cx_Oracle documentation.
Unicode-specific Column datatypes
The Core expression language handles unicode data by use of the Unicode and UnicodeText datatypes. These types correspond to the VARCHAR2 and CLOB Oracle datatypes by default. When using these datatypes with Unicode data, it is expected that the Oracle database is configured with a Unicode-aware character set, as well as that the NLS_LANG environment variable is set appropriately, so that the VARCHAR2 and CLOB datatypes can accommodate the data.
In the case that the Oracle database is not configured with a Unicode character set, the two options are to use the NCHAR and NCLOB datatypes explicitly, or to pass the flag use_nchar_for_unicode=True to create_engine(), which will cause the SQLAlchemy dialect to use NCHAR/NCLOB for the Unicode / UnicodeText datatypes instead of VARCHAR/CLOB.
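For example (DSN hypothetical):
from sqlalchemy import create_engine

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@dsn", use_nchar_for_unicode=True)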
Changed in version 1.3: The Unicode and UnicodeText datatypes now correspond to the VARCHAR2 and CLOB Oracle datatypes unless the use_nchar_for_unicode=True flag is passed to the dialect when create_engine() is called.
Unicode Coercion of result rows under Python 2
When result sets are fetched that include strings, under Python 3 the cx_Oracle DBAPI returns all strings as Python Unicode objects, since Python 3 only has a Unicode string type. This occurs for data fetched from datatypes such as VARCHAR2, CHAR, CLOB, NCHAR, NCLOB, etc. In order to provide cross-compatibility under Python 2, the SQLAlchemy cx_Oracle dialect will add Unicode-conversion to string data under Python 2 as well. Historically, this made use of converters that were supplied by cx_Oracle but were found to be non-performant; SQLAlchemy’s own converters are used for the string to Unicode conversion under Python 2. To disable the Python 2 Unicode conversion for VARCHAR2, CHAR, and CLOB, the flag coerce_to_unicode=False can be passed to create_engine().
Changed in version 1.3: Unicode conversion is applied to all string values by default under Python 2. The coerce_to_unicode flag now defaults to True and can be set to False to disable the Unicode coercion of strings that are delivered as VARCHAR2/CHAR/CLOB data.
Encoding Errors
For the unusual case that data in the Oracle database is present with a broken encoding, the dialect accepts a parameter encoding_errors which will be passed to Unicode decoding functions in order to affect how decoding errors are handled. The value is ultimately consumed by the Python decode function, and is passed both via cx_Oracle’s encodingErrors parameter consumed by Cursor.var(), as well as SQLAlchemy’s own decoding function, as the cx_Oracle dialect makes use of both under different circumstances.
New in version 1.3.11.
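For example, a sketch passing a standard Python codec error handler such as "replace" (DSN hypothetical, value illustrative only):
from sqlalchemy import create_engine

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@dsn", encoding_errors="replace")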
Fine grained control over cx_Oracle data binding performance with setinputsizes
The cx_Oracle DBAPI has a deep and fundamental reliance upon the usage of the DBAPI setinputsizes() call. The purpose of this call is to establish the datatypes that are bound to a SQL statement for Python values being passed as parameters. While virtually no other DBAPI assigns any use to the setinputsizes() call, the cx_Oracle DBAPI relies upon it heavily in its interactions with the Oracle client interface, and in some scenarios it is not possible for SQLAlchemy to know exactly how data should be bound, as some settings can cause profoundly different performance characteristics, while altering the type coercion behavior at the same time.
Users of the cx_Oracle dialect are strongly encouraged to read through cx_Oracle’s list of built-in datatype symbols at http://cx-oracle.readthedocs.io/en/latest/module.html#database-types. Note that in some cases, significant performance degradation can occur when using these types vs. not, in particular when specifying cx_Oracle.CLOB.
On the SQLAlchemy side, the DialectEvents.do_setinputsizes() event can be used both for runtime visibility (e.g. logging) of the setinputsizes step as well as to fully control how setinputsizes() is used on a per-statement basis.
New in version 1.2.9: Added DialectEvents.do_setinputsizes()
Example 1 - logging all setinputsizes calls
The following example illustrates how to log the intermediary values from a SQLAlchemy perspective before they are converted to the raw setinputsizes() parameter dictionary. The keys of the dictionary are BindParameter objects which have a .key and a .type attribute:
import logging
from sqlalchemy import create_engine, event
log = logging.getLogger(__name__)
engine = create_engine("oracle+cx_oracle://scott:tiger@host/xe")
@event.listens_for(engine, "do_setinputsizes")
def _log_setinputsizes(inputsizes, cursor, statement, parameters, context):
for bindparam, dbapitype in inputsizes.items():
log.info(
"Bound parameter name: %s SQLAlchemy type: %r "
"DBAPI object: %s",
bindparam.key, bindparam.type, dbapitype)
Example 2 - remove all bindings to CLOB
The CLOB datatype in cx_Oracle incurs a significant performance overhead; however, it is set by default for the Text type within the SQLAlchemy 1.2 series. This setting can be modified as follows:
from sqlalchemy import create_engine, event
from cx_Oracle import CLOB
engine = create_engine("oracle+cx_oracle://scott:tiger@host/xe")
@event.listens_for(engine, "do_setinputsizes")
def _remove_clob(inputsizes, cursor, statement, parameters, context):
for bindparam, dbapitype in list(inputsizes.items()):
if dbapitype is CLOB:
del inputsizes[bindparam]
RETURNING Support
The cx_Oracle dialect implements RETURNING using OUT parameters. The dialect supports RETURNING fully; however, cx_Oracle 6 is recommended for complete support.
LOB Objects
cx_oracle returns oracle LOBs using the cx_oracle.LOB object. SQLAlchemy converts these to strings so that the interface of the Binary type is consistent with that of other backends, which takes place within a cx_Oracle outputtypehandler.
cx_Oracle prior to version 6 would require that LOB objects be read before a new batch of rows would be read, as determined by the cursor.arraysize. As of the 6 series, this limitation has been lifted. Nevertheless, because SQLAlchemy pre-reads these LOBs up front, this issue is avoided in any case.
To disable the auto “read()” feature of the dialect, the flag auto_convert_lobs=False may be passed to create_engine(). Under the cx_Oracle 5 series, having this flag turned off means there is the chance of reading from a stale LOB object if not read as it is fetched. With cx_Oracle 6, this issue is resolved.
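For example, a sketch of disabling the automatic LOB conversion (DSN hypothetical):
from sqlalchemy import create_engine

engine = create_engine(
    "oracle+cx_oracle://scott:tiger@dsn", auto_convert_lobs=False)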
Changed in version 1.2: the LOB handling system has been greatly simplified internally to make use of outputtypehandlers, and no longer makes use of alternate “buffered” result set objects.
Two Phase Transactions Not Supported
Two phase transactions are not supported under cx_Oracle due to poor driver support. As of cx_Oracle 6.0b1, the interface for two phase transactions has been changed to be more of a direct pass-through to the underlying OCI layer with less automation. The additional logic to support this system is not implemented in SQLAlchemy.
Precision Numerics
SQLAlchemy’s numeric types can handle receiving and returning values as Python Decimal objects or float objects. When a Numeric object, or a subclass such as Float, DOUBLE_PRECISION etc. is in use, the Numeric.asdecimal flag determines if values should be coerced to Decimal upon return, or returned as float objects. To make matters more complicated under Oracle, Oracle’s NUMBER type can also represent integer values if the “scale” is zero, so the Oracle-specific NUMBER type takes this into account as well.
The cx_Oracle dialect makes extensive use of connection- and cursor-level “outputtypehandler” callables in order to coerce numeric values as requested. These callables are specific to the specific flavor of Numeric in use, as well as if no SQLAlchemy typing objects are present. There are observed scenarios where Oracle may send incomplete or ambiguous information about the numeric types being returned, such as a query where the numeric types are buried under multiple levels of subquery. The type handlers do their best to make the right decision in all cases, deferring to the underlying cx_Oracle DBAPI for all those cases where the driver can make the best decision.
When no typing objects are present, as when executing plain SQL strings, a default “outputtypehandler” is present which will generally return numeric values which specify precision and scale as Python Decimal objects. To disable this coercion to decimal for performance reasons, pass the flag coerce_to_decimal=False to create_engine():
engine = create_engine("oracle+cx_oracle://dsn", coerce_to_decimal=False)
The coerce_to_decimal flag only impacts the results of plain string SQL statements that are not otherwise associated with a Numeric SQLAlchemy type (or a subclass of such).
Changed in version 1.2: The numeric handling system for cx_Oracle has been reworked to take advantage of newer cx_Oracle features as well as better integration of outputtypehandlers.