- Tanzu Greenplum 5.29.x Release Notes
- Release 5.29.10
- Release 5.29.8
- Release 5.29.7
- Release 5.29.6
- Release 5.29.5
- Release 5.29.4
- Release 5.29.3
- Release 5.29.2
- Release 5.29.1
- Release 5.29.0
- Beta Features
- Deprecated Features
- Known Issues and Limitations
- Differences Compared to Open Source Greenplum Database
- Supported Platforms
- Tanzu Greenplum Tools and Extensions Compatibility
- Hadoop Distribution Compatibility
- Upgrading to Greenplum Database 5.29.x
- Migrating Data to Tanzu Greenplum 5.x
- Tanzu Greenplum on DCA Systems
- Update for gp_toolkit.gp_bloat_expected_pages Issue
- Update for gp_toolkit.gp_bloat_diag Issue
Tanzu Greenplum 5.29.x Release Notes
Tanzu Greenplum Database is a massively parallel processing (MPP) database server that supports next generation data warehousing and large-scale analytics processing. By automatically partitioning data and running parallel queries, it allows a cluster of servers to operate as a single database supercomputer performing tens or hundreds of times faster than a traditional database. It supports SQL, MapReduce parallel processing, and data volumes ranging from hundreds of gigabytes to hundreds of terabytes.
This document contains pertinent release information about Tanzu Greenplum Database 5.29.6. For previous versions of the release notes for Greenplum Database, go to Greenplum Database Documentation. For information about Greenplum Database end of life, see the Support Lifecycle Policy.
Tanzu Greenplum 5.x software is available for download from the Tanzu Greenplum page on Tanzu Network.
Tanzu Greenplum 5.x is based on the open source Greenplum Database project code.
Important: The Greenplum gpbackup and gprestore utilities are now distributed separately from Greenplum Database, and are updated independently of the core Greenplum Database server. These utilities will not be updated in future Greenplum Database 5.x releases. You can upgrade to the latest gpbackup and gprestore versions by downloading and installing the latest Greenplum Backup and Restore release from VMware Tanzu Network.
Important: VMware Support does not provide support for open source versions of Greenplum Database. Only Tanzu Greenplum Database is supported by VMware Support.
Release 5.29.10
Release Date: 2022-10-28
Resolved Issues
The listed issues are resolved in Greenplum Database 5.29.10.
11948 - Server
Resolved a crash that could occur because Greenplum incorrectly allowed REINDEX TABLE on a partitioned table from within a PL/pgSQL function. Because REINDEX TABLE on a partitioned table expands the table and starts a new transaction to reindex each table, the operation cannot be rolled back and may crash if called within PL/pgSQL. The code was modified to prevent this operation from running inside of a function.
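For illustration, this is a hypothetical sketch of the pattern that is now rejected; the function and table names are placeholders, and sales is assumed to be a partitioned table with indexes:
CREATE OR REPLACE FUNCTION reindex_sales() RETURNS void AS $$
BEGIN
  REINDEX TABLE sales;  -- expands to every partition and starts new transactions, so it cannot run inside a function
END;
$$ LANGUAGE plpgsql;
SELECT reindex_sales();  -- previously could crash; now reported as an error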
12989 - Server
Resolved a problem where repeated errors could be reported during certain queries involving pg_locks and a partitioned table.
14137 - Release Engineering
Merged several Python dependencies into the main Greenplum Database repository to simplify the build process.
32524 - Server
Resolved a problem where queries could not be canceled using pg_cancel_backend() or pg_terminate_backend(), because the server would lock up after a connection loss.
32428 - Data Flow
Resolved a problem that prevented certain SSL algorithms, such as ECDHE-RSA-AES256-GCM-SHA384, from being applied during the SSL handshake in libpq.
32408 - Server
Resolved a segmentation fault that could occur during a server restart after gp_before_filespace_setup was enabled.
Release 5.29.8
Resolved Issues
The listed issues are resolved in Greenplum Database 5.29.8.
32397 - Server
Resolved a segfault that could occur in cases where the master segment dispatched internal parameters that contained non-initialized values. The problem was fixed by intercepting such parameters at the Query Executor before deserialization occurs.
32381 - Server
Greenplum 5.29.8 resolves Postgres CVE-2022-2625: Extension scripts replace objects not belonging to the extension.
Release 5.29.7
Resolved Issues
The listed issues are resolved in Greenplum Database 5.29.7.
32191 - Server
By default, VACUUM FULL cannot vacuum tuples in utility mode, because all tuples are considered to be “live” while in utility mode. Workaround: In cases where it is required to vacuum tuples in utility mode, set the configuration parameter gp_disable_dtx_visibility_check to true. This setting makes all tuples eligible for VACUUM in utility mode.
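A minimal sketch of this workaround, assuming you are already connected to the target segment instance in utility mode and that the parameter can be set at the session level; my_table is a placeholder name:
SET gp_disable_dtx_visibility_check = true;   -- make all tuples eligible for VACUUM in utility mode
VACUUM FULL my_table;
SET gp_disable_dtx_visibility_check = false;  -- restore the default when finished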
32179, 31372, 9864 - Data Flow
Resolved a gpfdist problem that could cause errors similar to unknown meta type 1xx (url_curl.c:xxxx) when loading data after upgrading to Tanzu Greenplum 6.
Release 5.29.6
Changed Features
Greenplum Database 5.29.6 includes these changes:
- The metrics collector included with Greenplum 5.29.6 was updated to support Greenplum Command Center 4.15.0. See the Greenplum Command Center 4.15 Documentation for more information.
Resolved Issues
The listed issues are resolved in Greenplum Database 5.29.6.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Greenplum page on VMware Tanzu Network.
181673873 - Query Processing
The gp_toolkit function was yielding unexpected results because GPORCA was relying on fallback plans to hide differences between GPORCA and the Postgres Planner; because the differences were hidden, exceptions were not being caught. This issue has been resolved.
181229489 - gpload
Resolves an issue where gpload did not respect case sensitivity enforced through the use of double quotes for its staging table name and column names.
11215 - Server
Resolved a problem where partition elimination did not work inside a PL/pgSQL function. The problem occurred because PL/pgSQL used only the plan cached by the Postgres Planner to reevaluate prepared queries, instead of also considering query parameters. The problem was resolved for the Postgres Planner by ensuring that PL/pgSQL considers both the cached plan and parameters when reevaluating a plan.
32075 - Server
Resolved an issue where a duplicated call in pgstat_read_statsfile() would hit the file allocation limit and result in an error similar to too many private files demanded. The issue has been resolved by removing the duplicate call.
32132 - Server
Resolved a problem where certain Greenplum Command Center metrics queries failed to consider the possibility of a NULL value for a database name, which could result in a PANIC and segment failure. The problem was resolved by adding a check to account for NULL database name values.
Release 5.29.5
Resolved Issues
The listed issues are resolved in Greenplum Database 5.29.5.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Greenplum page on VMware Tanzu Network.
32102, 8012 - gpfdist
Resolved an issue where gpfdist would fail to load some data in a .gz data file if the file contained multiple end-of-file (EOF) flags. The gpfdist code was modified so that it continues reading from a .gz data file until it reaches the actual EOF flag.
32080 - gpload
Resolves an issue where gpload, when REUSE_TABLES: true, could not find a staging table to reuse, and failed to create the staging table in the public schema. This issue is resolved; gpload now creates staging tables in the schema specified by EXTERNAL: SCHEMA:.
32064 - Query Processing
For queries defined inside a user-defined function (UDF), GPORCA tries to evaluate the predicate for static partition elimination on the partitioned table when it executes the parent query. This requires GPORCA to plan, optimize, and execute statements inside the UDF. However, GPORCA does not support nested optimization calls, and this activity caused crashes when a nested optimization request was required. This problem was resolved by ensuring that GPORCA falls back to the Postgres Planner when nested optimization requests are required.
Release 5.29.4
Resolved Issues
The listed issues are resolved in Greenplum Database 5.29.4.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Greenplum page on VMware Tanzu Network.
31936, 31913 - Data Flow
In certain cases, an INSERT operation performed on a gpfdist writable external table did not write the complete data set, yet reported no error. This issue is resolved; gpfdist now correctly reports and completes error processing when it encounters a retry failure due to poor network conditions.
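The affected operation has roughly the following shape; the host, port, file name, and table definitions are placeholders rather than part of the original report:
CREATE WRITABLE EXTERNAL TABLE sales_out (LIKE sales)
LOCATION ('gpfdist://etlhost:8081/sales.out')
FORMAT 'TEXT' (DELIMITER '|');
INSERT INTO sales_out SELECT * FROM sales;  -- previously could write a partial data set without reporting an error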
Release 5.29.3
Resolved Issues
The listed issues are resolved in Greenplum Database 5.29.3.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Greenplum page on VMware Tanzu Network.
31884 - Query Optimizer, Server:Execution
Resolves an issue where the Query Optimizer generated a plan that could produce incorrect results or crash during execution for queries that included a subquery ALL construct.
9649 - Server
Resolves an issue where a PANIC error occurred intermittently when a query waiting on a resource queue was canceled.
Release 5.29.2
Changed Features
Greenplum Database 5.29.2 includes these changes:
Greenplum Streaming Server (GPSS) version 1.4.3 is included, which includes changes and bug fixes. Refer to the GPSS Release Notes for more information on release content and to access the GPSS documentation.
Note: If you have previously used GPSS in your Greenplum 5.x installation, you may be required to perform upgrade actions as described in Upgrading the Streaming Server.
The PXF version 6.2.1 distribution is available with this release; you can download it from the Release Download directory named Greenplum Platform Extension Framework under Tanzu Greenplum release version 5.29.2 on VMware Tanzu Network. Refer to the PXF documentation for information about this release and for installation instructions.
The metrics collector included with Greenplum 5.29.2 was updated to support Greenplum Command Center 4.14.0, which resolves several issues. Greenplum Command Center versions prior to version 4.14.0 are not supported with Greenplum 5.29.2 and later releases.
Greenplum Text version 3.8.1 is available with this release; you can download it from the Tanzu Greenplum release version 5.29.2 Release Download directory named Greenplum Advanced Analytics on VMware Tanzu Network. Refer to the Greenplum Text Documentation.
Resolved Issues
The listed issues are resolved in Tanzu Greenplum Database 5.29.2.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Tanzu Greenplum page on Tanzu Network.
Postgres CVE fixes
This release backports the following Postgres CVE fixes:
- CVE-2021-23214: Server processes unencrypted bytes from man-in-the-middle.
- CVE-2021-23222: libpq processes unencrypted bytes from man-in-the-middle.
572 - Greenplum Installation
Resolved an issue with the rpm installers, so installing Greenplum does not overwrite the symlink if the install file is for a different major version.
31768 - Data Flow
Resolved an issue where gpload reported the error No such file or directory",,,,,,"CREATE EXTENSION IF NOT EXISTS dataflow; when connecting to a Greenplum Database 5.x system. This fix checks the Greenplum version to determine whether it should create a new extension.
31887 - Server
Resolves a resource queue issue where a session with multiple active portals did not decrement the active statement count following a deadlock report or statement cancellation.
31896 - Optimizer
Using set operators (the EXCEPT clause) could cause crashes during query execution. This occurred because GPORCA did not add required scalar casts for input columns when the types did not match the output types of the set operation. GPORCA was modified to add the required scalar casts.
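The affected queries have roughly this shape, where the input columns of the set operation have different types and implicit casts are required; the table and column names are illustrative:
SELECT order_id::integer FROM orders_current
EXCEPT
SELECT order_id::numeric FROM orders_archive;  -- the integer input must be cast to numeric, the output type of the set operation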
Release 5.29.1
Resolved Issues
The listed issues are resolved in Tanzu Greenplum Database 5.29.1.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Tanzu Greenplum page on Tanzu Network.
31825 - Server
Certain relfiles were incorrectly zeroed out after running a full gprecoverseg, caused by the resync manager skipping these relfiles. Logging has been improved to help determine the root cause of this issue.
31817 - Server
This fix introduces a new GUC, gp_enable_drop_key_constraint_child_partition, which provides a way to drop the primary/unique key directly from child partitions.
31789 - Server
Enabling the GUC log_hostnames with gpconfig generated WARNING log messages. This issue is now resolved.
179159922 - Server
Greenplum Database would initialize the pg_aocsseg table entries with frozen tuples to ensure these entries were implicitly visible even after a rollback. This strategy created issues with the rollback of Append-Optimized Columnar (AOC) tables. This issue has now been resolved.
Release 5.29.0
New Features
Greenplum Database 5.29.0 includes these new features:
Greenplum 5.29.0 introduces a new Query Optimizer server configuration parameter, optimizer_xform_bind_threshold. You can use this parameter to reduce the optimization time and overall memory usage of queries that include deeply nested expressions by specifying the maximum number of bindings per transform that GPORCA produces per group expression.
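As a minimal sketch, the parameter can be set for the current session as shown below (it can also be set cluster-wide with gpconfig); the value shown is an arbitrary illustration, not a tuning recommendation:
SET optimizer_xform_bind_threshold = 1000;  -- cap the number of bindings GPORCA produces per transform, per group expression
SHOW optimizer_xform_bind_threshold;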
The gpload utility was updated to use the same code and feature set that is included with gpload in Tanzu Greenplum 6.x releases. This update adds the --max_retries option to specify the number of times the utility attempts to connect to Greenplum Database after a connection timeout. The default value, 0, does not retry the connection after a timeout. This version of gpload also provides additional error information for cases where staging tables cannot be created.
The license file for the Windows Client and Loader Tools Package was updated to the latest version.
Resolved Issues
The listed issues are resolved in Tanzu Greenplum Database 5.29.0.
For issues resolved in prior 5.x releases, refer to the corresponding release notes. Release notes are available from the Tanzu Greenplum page on Tanzu Network.
31736 - Resource Queues
Due to improper error handling, Greenplum Database raised a duplicate portal identifier warning that was in some cases immediately followed by an out of shared memory error. This issue is resolved; Greenplum Database now raises distinct errors for duplicate portal identifier and out of shared memory.
31727, 31736 - Server
When the log_lock_waits GUC was enabled, it resulted in spurious deadlock reports and orphaned wait queue states which, in turn, could lead to memory corruption of certain internal tables. This issue can be resolved by disabling the log_lock_waits GUC.
31725 - Server
In some cases, Greenplum Database generated a PANIC when the user cancelled a query on an AO table due to a double free of a visimap object. This issue is resolved.
31708 - Catalog and Metadata
Resolves an issue where a non-superuser who ran VACUUM FULL on a table that they had no permission to access could block further access to the table by currently running transactions. Greenplum Database now performs the permission check before it acquires a lock on the table.
31617 - Server
A database instance was failing to start up, with the message Command pg_ctl reports Master gdm instance active, because a mirror segment could not be recovered. This was due to the incorrect type of lock being used when creating or altering a resource group. This issue is resolved.
31466 - gpload
Resolved an issue where gpload would fail with column names that used uppercase or mixed-case characters. gpload now automatically adds double quotes to column names that are not already quoted in the YAML control file.
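For example, a control file fragment along the following lines no longer requires manually quoted column names. This is a hedged sketch of part of a gpload YAML control file; all names, paths, and types are placeholders, and the gpload reference documentation describes the complete control file format:
GPLOAD:
   INPUT:
    - SOURCE:
         FILE:
           - /data/orders.csv
    - COLUMNS:
           - OrderID: int4
           - CustomerName: text   # mixed-case names are now double-quoted automatically
    - FORMAT: csv
   OUTPUT:
    - TABLE: public.orders
    - MODE: INSERT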
Beta Features
Because Tanzu Greenplum Database is based on the open source Greenplum Database project code, it includes several Beta features to allow interested developers to experiment with their use on development systems. Feedback will help drive development of these features, and they may become supported in future versions of the product.
Warning: Beta features are not supported for production deployments.
Greenplum Database 5.29.6 includes these Beta features:
- GPORCA cost model for bitmap indexes.
- GPORCA algorithm for calculating the scale factor for join queries.
- Recursive WITH queries (Common Table Expressions). See WITH Queries (Common Table Expressions) in the Tanzu Greenplum Database Documentation.
- Resource groups remain a Beta feature only on the SuSE 11 platform, due to limited cgroups functionality in the kernel. SuSE 12 resolves the Linux cgroup issues that caused the performance degradation when Greenplum Database resource groups are enabled.
Deprecated Features
Deprecated features will be removed in a future major release of Greenplum Database. Tanzu Greenplum 5.x deprecates:
- The --skip_root_stats option to analyzedb (deprecated since 5.18). If the option is specified, a warning is issued stating that the option will be ignored.
- The gptransfer utility (deprecated since 5.17). The utility copies objects between Greenplum Database systems. The gpcopy utility provides gptransfer functionality.
- The gphdfs external table protocol (deprecated since 5.17). Consider using the Greenplum Platform Extension Framework (PXF) pxf external table protocol to access data stored in an external Hadoop file system; a sketch appears after this list. Refer to Accessing External Data with PXF for more information.
- The server configuration parameter gp_max_csv_line_length (deprecated since 5.11). For data in a CSV formatted file, the parameter controls the maximum allowed line length that can be imported into the system.
- The server configuration parameter gp_unix_socket_directory (deprecated since 5.9). Note: Do not change the value of this parameter. The default location is required for Greenplum Database utilities.
- Support for Data Domain Boost 3.0.0.3 (deprecated since 5.2). The DELL EMC end of Primary Support date is December 31, 2017.
- These unused catalog tables (deprecated since 5.1): gp_configuration, gp_db_interfaces, gp_interfaces.
- The gpcrondump and gpdbrestore utilities (deprecated since 5.0).
- The gpcheck utility (deprecated since 5.0).
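As a hedged sketch of the recommended move away from gphdfs, a PXF external table for HDFS text data looks roughly like the following; the column list, HDFS path, and profile name are placeholders and depend on your PXF version and configuration:
CREATE EXTERNAL TABLE sales_ext (id int, amount numeric, region text)
LOCATION ('pxf://data/sales/*.csv?PROFILE=hdfs:text')
FORMAT 'CSV';
SELECT count(*) FROM sales_ext;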
Known Issues and Limitations
Tanzu Greenplum 5.x has these limitations:
- Upgrading a Greenplum Database 4.3.x release to Tanzu Greenplum 5.x is not supported. See Migrating Data to Tanzu Greenplum 5.x.
- Some features are works-in-progress and are considered to be Beta features. VMware does not support using Beta features in a production environment. See Beta Features.
- Greenplum Database 4.3.x packages are not compatible with Tanzu Greenplum 5.x.
The following table lists key known issues in Tanzu Greenplum 5.x.
Issue | Category | Description |
---|---|---|
31881 | gpload | gpload returns an error when it erroneously tries to register the dataflow extension. Ignore the error. |
11143 | gpinitsystem | If a stale .gphostcache file exists in a user’s home directory, then gpinitsystem fails with an error similar to: [FATAL]:-Unable to contact <name>: ping: cannot resolve <name>: Unknown host <name>: getaddrinfo — nodename nor servname provided, or not known Script Exiting! Workaround: Delete the stale .gphostcache file and run gpinitsystem again. |
n/a | RPM Installation | When you use yum or rpm to upgrade to Greenplum Database 5.28.0 or later, a bug in the older RPM packaging removes the symbolic link (for example, /usr/local/greenplum-db ) that would normally point to the version-specific directory. This problem does not occur for new installations, or when upgrading from 5.28.x to a later version. Workaround: After first upgrading to 5.28.0 or later, manually create the symbolic link to point to the new version directory; a sketch for the default /usr/local install location follows this table. |
30537 | Postgres Planner | The Postgres Planner generates a very large query plan that causes out of memory issues for the following type of CTE (common table expression) query: the WITH clause of the CTE contains a partitioned table with a large number of partitions, and the WITH reference is used in a subquery that joins another partitioned table. Workaround: If possible, use the GPORCA query optimizer; with the server configuration parameter optimizer set to on, Greenplum Database uses GPORCA when possible. |
30420 | Postgres Planner | Greenplum Database 5 generates a PANIC for some queries that use the aggregate function percentile_cont(). Workaround: Setting the server configuration parameter gp_idf_deduplicate can avoid the issue. Note: The issue does not occur in Greenplum Database 6. Also, the server configuration parameter gp_idf_deduplicate has been removed in Greenplum Database 6. |
N/A | PXF | PXF is available only for supported Red Hat and CentOS platforms. PXF is not available for supported SuSE platforms. |
9460 | CREATE UNIQUE INDEX | When you create a unique index on a partitioned table, Greenplum Database does not check to ensure that the index contains the table partition keys, which are required to enforce uniqueness. Workaround: To avoid duplicate rows, manually specify partition keys when executing |
3290 | JSON | The to_json() function is not implemented as a callable function. Attempting to call the function results in an error. Workaround: Greenplum Database invokes to_json() internally when casting a value to the json data type; cast the value to json rather than calling the function directly. |
29064 | Storage: DDL | The money data type accepts out-of-range values as negative values, and no error message is displayed. Workaround: Use only in-range values for the money data type. |
29139 | DML | In some cases for an append-optimized partitioned table, Greenplum Database acquires a ROW EXCLUSIVE lock on all leaf partitions of the table when inserting data directly into one of the leaf partitions of the table. The locks are acquired the first time Greenplum Database performs validation on the leaf partitions. When inserting data into one leaf partition, the locks are not acquired on the other leaf partitions as long as the validation information remains in memory.The issue does not occur for heap-storage partitioned tables. |
29246 | gpconfig | When querying the gp_enable_gpperfmon server configuration parameter with gpconfig -s gp_enable_gpperfmon, gpconfig always reports off, even when the parameter has been set to on correctly, and the gpmmon and gpsmon agent processes are running. Workaround: To determine whether gp_enable_gpperfmon is enabled, check whether the gpmmon and gpsmon agent processes are running. |
29351 | gptransfer | The gptransfer utility can copy a data row with a maximum length of 256 MB. |
29395 | DDL | The gpdbrestore or gprestore utility fails when the utility attempts to restore a table from a backup and the table is incorrectly defined with duplicate columns as distribution keys. The issue is caused when the gpcrondump or gpbackup utility backed up a table that is incorrectly defined. The CREATE TABLE AS command could create a table that is incorrectly defined with a distribution policy that contains duplicate columns as distribution keys. |
29485 | Catalog and Metadata | When a session creates temporary objects in a database, Greenplum Database might not drop the temporary objects when the session ends, if the session terminates abnormally or is terminated by an administrator command. |
29496 | gpconfig | For a small number of server configuration parameters such as log_min_messages, the command gpconfig -s <config_param> does not display the correct value of the parameter for the segment hosts when the value of the parameter on the master is different than the value on the segments. Workaround: To display the parameter value specified in the postgresql.conf file on the master and segment hosts, run gpconfig -s <config_param> with the --file option. |
29523 | gptoolkit | An upgrade between minor releases does not update the template0 database, and in some cases, using the views in the gp_toolkit schema might cause issues if you create a database using template0 as the template database after you upgrade to Greenplum Database 5.11.0 or later. For example, the issues might occur if you upgrade a Greenplum Database system from 5.3.0 or an earlier 5.x release and then create a database using template0 as the template. Workaround: You can update the views in the gp_toolkit schema in the newly created database. See Update for gp_toolkit.gp_bloat_expected_pages Issue and Update for gp_toolkit.gp_bloat_diag Issue. |
29674 | VACUUM | Performing parallel VACUUM operations on a catalog table such as pg_class , gp_relation_node , or pg_type and another table causes a deadlock and blocks connections to the database.Workaround: Avoid performing parallel |
29699 | ANALYZE | In Greenplum Database 5.15.1 and earlier 5.x releases, an ANALYZE command might return an error that states target lists can have at most 1664 entries when performing an ANALYZE operation on a table with a large number of columns (more than 800 columns). The error occurs because the in-memory sample table created by ANALYZE requires an additional column to indicate whether a column is NULL or is a truncated column for each variable-length column being analyzed (such as varchar, text, bpchar, numeric, array, and geometric data type columns). The error is returned when ANALYZE attempts to create a sample table and the number of columns (table columns and indicator columns) exceeds the maximum number of columns allowed. Workaround: To collect statistics on the table, perform ANALYZE on subsets of the table columns. |
29766 | VACUUM | A long-running catalog query can block VACUUM operations on the system catalog until the query completes or is canceled. This type of blocking cannot be observed using pg_locks , and the VACUUM operation itself cannot be canceled until the long-running query completes or is canceled. |
29917 | Segment Mirroring | A “read beyond eof” error has been observed with certain persistent tables during full recovery. The root cause of this problem has not yet been determined. Greenplum Database version 5.21 contains additional debug logging to help in determining the cause of this problem. The additional logging is enabled by default, and adds approximately 646 bytes to each persistent table entry file. If you want to disable the additional debug logging, set the debug_filerep_config_print configuration parameter to false. |
30180 | Locking, Signals, Processes | The pg_cancel_backend() and pg_terminate_backend() functions might leave some orphan processes when they are used to cancel a running VACUUM command.Workaround: You can stop the orphan processes by restarting the Greenplum Database system. |
30207 | Catalog and Metadata | Defining a unique index on an empty table that is defined with a If the table is not empty, an error is returned. |
148119917 | Resource Groups | Testing of the resource groups feature has found that a kernel panic can occur when using the default kernel in RHEL/CentOS systems. The problem occurs due to a bug in the kernel cgroups implementation and results in a kernel panic. Workaround: Upgrade to the latest available kernel for your Red Hat or CentOS release to avoid the system panic. |
149789783 | Resource Groups | Significant Tanzu Greenplum performance degradation has been observed when enabling resource group-based workload management on Red Hat 6.x, CentOS 6.x, and SuSE 11 systems. This issue is caused by a Linux cgroup kernel bug. This kernel bug has been fixed in CentOS 7.x and Red Hat 7.x systems. When resource groups are enabled on systems with an affected kernel, there can be a delay of 1 second or longer when starting a transaction or a query. The delay is caused by a Linux cgroup kernel bug in the synchronization mechanism that is used when attaching processes to a cgroup. The issue causes single attachment operations to take longer and also causes all concurrent attachments to be executed in sequence. For example, one process attachment could take about 0.01 second. When concurrently attaching 100 processes, the fastest process attachment takes 0.01 second and the slowest takes about 1 second. Tanzu Greenplum performs process attachments when transactions or queries are started, so the performance degradation depends on the number of concurrently started transactions or queries, and is not related to concurrently running queries. Tanzu Greenplum also has optimizations that bypass the attachment when a QE is reused by multiple queries in the same session. Workaround: This bug does not affect CentOS 7.x and Red Hat 7.x systems. If you use Red Hat 6 and the performance with resource groups is acceptable for your use case, upgrade your kernel to version 2.6.32-696 or higher to benefit from other fixes to the cgroups implementation. SuSE 11 does not have a kernel version that resolves this issue; resource groups are still considered to be a Beta feature on this platform. Resource groups are not supported on SuSE 11 for production use. |
150906510 | Backup and Restore | Greenplum Database 4.3.15.0 and later backups contain the following line in the backup files:
However, Greenplum Database 5.0.0 does not have a parameter named
Also, the report file may contain the error:
These warnings and errors do not affect the restoration procedure, and can be ignored. |
151135629 | COPY command | When the ON SEGMENT clause is specified, the COPY command does not support specifying a SELECT statement in the COPY TO clause. For example, this command is not supported.
|
158011506 | Catalog and Metadata | In some cases, the timezone used by Greenplum Database might be different than the host system timezone, or the Greenplum Database timezone set by a user. In some rare cases, times used and displayed by Greenplum Database might be slightly different than the host system time. The timezone used by Greenplum Database is selected from a set of internally stored PostgreSQL timezones. Greenplum Database selects the timezone by matching a PostgreSQL timezone with the user-specified time zone, or the host system time zone. For example, when selecting a default timezone, Greenplum Database uses an algorithm to select a PostgreSQL timezone based on the host system timezone. If the system timezone includes leap second information, Greenplum Database cannot match the system timezone with a PostgreSQL timezone. Greenplum Database calculates a best match with a PostgreSQL timezone based on information from the host system. Workaround: Set the Greenplum Database and host system timezones to a timezone that is supported by both Greenplum Database and the host system. For example, you can show and set the Greenplum Database timezone with the gpconfig utility. You must restart Greenplum Database after changing the timezone. The Greenplum Database catalog view pg_timezone_names provides Greenplum Database timezone information. |
162317340 | Client Tools | On Tanzu Network in the file listings for Greenplum Database releases between 5.7.1 and 5.14.0, the Greenplum Database AIX Client Tools download file is incorrectly labeled as Loaders for AIX 7. The file you download is the correct AIX 7 Client Tools file. |
163807792 | gpbackup/ gprestore | When the % sign was specified as the delimiter in an external table text format, gpbackup escaped the % sign incorrectly in the CREATE EXTERNAL TABLE command. This has been resolved. The % sign is correctly escaped. |
164671144 | gpssh-exkeys | The gpssh-exkeys utility relies on the PyCrypto library, for which security vulnerabilities have been identified. Through testing and investigation, VMware has determined that these vulnerabilities do not affect Greenplum Database, and no actions are required for existing Greenplum Database 4.3 or 5.x releases. However, there may be additional unidentified vulnerabilities in the PyCrypto library, and users who install a later version of PyCrypto could be exposed to other vulnerabilities. The PyCrypto library will be removed from Greenplum Database 6.0. Workaround: Administrators can set up passwordless SSH between hosts in the Greenplum Database cluster without using the gpssh-exkeys utility.
When adding new hosts to the Greenplum Database system, you must create a new SSH key for each new host and exchange keys between the existing hosts and new hosts. |
165434975 | search_path | An identified PostgreSQL security vulnerability (https://nvd.nist.gov/vuln/detail/CVE-2018-1058) also exists in Greenplum Database. The problem centers around the default public schema and how Greenplum Database uses the search_path setting. The ability to create objects with the same names in different schemas, combined with how Greenplum Database searches for objects within schemas, presents an opportunity for a user to modify the behavior of a query for other users. For example, a malicious user could insert a trojan-horse function that, when executed by a superuser, grants escalated privileges to the malicious user.There are methods to protect from this vulnerability. See A Guide to CVE-2018-1058: Protect Your Search Path on the PostgreSQL wiki for a full explanation of the vulnerability and the steps you can take to protect your data. |
168142530 | Backup and Restore | Backups created on Greenplum Database versions before 4.3.33.0 or 5.1.19.0 may fail to restore to Greenplum Database versions 4.3.33.0 or 5.1.19.0 or later. In Greenplum Database 4.3.33.0 and 5.1.19.0, a check was introduced to ensure that the distribution key for a table is equal to the primary key or is a left-subset of the primary key. If you add a primary key to a table that contains no data, Greenplum Database automatically updates the distribution key to match the primary key. The index key for any unique index on a table must also match or be a left-subset of the distribution key. Earlier Greenplum Database versions did not enforce these policies. Restoring a table from an older backup that has a different distribution key causes errors because the backup data file on each segment contains data that was distributed using the original distribution key. Restoring a unique index with a key that does not match the distribution key will fail with an error when attempting to create the index. This issue affects the |
168548176 | gpbackup | When using gpbackup to back up a Greenplum Database 5.7.1 or earlier 5.x release with resource groups enabled, gpbackup returns a column not found error for t6.value AS memoryauditor . |
168957894 | PXF | The PXF Hive Connector does not support using the Hive profiles to access Hive transactional tables.Workaround: Use the PXF JDBC Connector to access Hive. |
169052763 | gprestore | You can create a full backup of a database with gpbackup using the --with-stats option to back up table statistics. However, when you try to restore only some of the tables and the statistics for the tables using gprestore with a table filter option and the --with-stats option, gprestore attempts to restore all the table statistics from the backup, not just the statistics for the tables being restored. When restoring all the table statistics, if a table is not in the target database, |
26675 | gpcrondump | During the transition from Daylight Saving Time to Standard Time, this sequence of events might cause a gpcrondump backup operation to fail. If an initial backup is taken between 1:00AM and 2:00AM Daylight Saving Time, and a second backup is taken between 1:00AM and 2:00AM Standard Time, the second backup might fail if the first backup has a timestamp newer than the second. VMware recommends performing only a single backup between the hours of 1:00AM and 2:00AM on the days when the time changes.
If the failure scenario is encountered, it can be remedied by restarting the backup operation after 2:00AM Standard Time. |
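The following is a hedged sketch of the RPM Installation workaround referenced in the table above; it assumes the default /usr/local install location, and the version directory name is illustrative:
$ sudo ln -s /usr/local/greenplum-db-5.29.6 /usr/local/greenplum-db
$ ls -l /usr/local/greenplum-db   # verify the link points at the new version directory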
Differences Compared to Open Source Greenplum Database
Tanzu Greenplum 5.x includes all of the functionality in the open source Greenplum Database project and adds:
- Product packaging and installation script
- Support for QuickLZ compression. QuickLZ compression is not provided in the open source version of Greenplum Database due to licensing restrictions.
- Support for managing Greenplum Database using Tanzu Greenplum Command Center
- Support for full text search and text analysis using Tanzu GPText
- Support for data connectors:
- Greenplum-Spark Connector
- Greenplum-Informatica Connector
- Greenplum-Kafka Integration
- Gemfire-Greenplum Connector
- Greenplum Streaming Server
- DataDirect ODBC/JDBC Drivers
- gpcopy utility for copying or migrating objects between Greenplum systems
- Greenplum backup plugin for DD Boost
- Backup/restore storage plugin API
Supported Platforms
Tanzu Greenplum 5.29.6 runs on the following platforms:
- Red Hat Enterprise Linux 64-bit 7.x (See the following Note)
- Red Hat Enterprise Linux 64-bit 6.x (See the following Note)
- SuSE Linux Enterprise Server 64-bit 12 SP2 and SP3 with kernel version greater than 4.4.73-5. (See the following Note)
- SuSE Linux Enterprise Server 64-bit 11 SP4 (See the following Note)
- CentOS 64-bit 7.x
- CentOS 64-bit 6.x (See the following Note)
- Oracle Linux 64-bit 7.4, using the Red Hat Compatible Kernel (RHCK)
Note: For the supported Linux operating systems, Tanzu Greenplum Database is supported on system hosts using either AMD or Intel CPUs based on the x86-64 architecture. VMware recommends using a homogeneous set of hardware (system hosts) in a Greenplum Database system.
Important: Significant Greenplum Database performance degradation has been observed when enabling resource group-based workload management on Red Hat 6.x, CentOS 6.x, and SuSE 11 systems. This issue is caused by a Linux cgroup kernel bug. This kernel bug has been fixed in CentOS 7.x and Red Hat 7.x systems.
If you use Red Hat 6 and the performance with resource groups is acceptable for your use case, upgrade your kernel to version 2.6.32-696 or higher to benefit from other fixes to the cgroups implementation.
SuSE 11 does not have a kernel version that resolves this issue; resource groups are still considered to be a Beta feature on this platform. Resource groups are not supported on SuSE 11 for production use. See known issue 149789783.
Tanzu Greenplum on SuSE 12 supports resource groups for production use. SuSE 12 resolves the Linux cgroup kernel issues that caused the performance degradation when Greenplum Database resource groups are enabled.
Note: For Greenplum Database installed on Red Hat Enterprise Linux 7.x or CentOS 7.x prior to 7.3, an operating system issue might cause Greenplum Database to hang when running large workloads. The issue is caused by Linux kernel bugs. RHEL 7.3 and CentOS 7.3 resolve the issue.
Note: Greenplum Database on SuSE Linux Enterprise systems does not support these features.
- The PL/Perl procedural language
- The gpmapreduce tool
- The PL/Container language extension
- The Greenplum Platform Extension Framework (PXF)
Note: PL/Container is not supported on RHEL/CentOS 6.x systems, because those platforms do not officially support Docker.
Greenplum Database support on Dell EMC DCA.
- Tanzu Greenplum Database 5.29.6 is supported on DCA systems that are running DCA software version 3.4 or greater.
- Only Tanzu Greenplum Database is supported on DCA systems. Open source versions of Greenplum Database are not supported.
- FIPS is supported on DCA software version 3.4 and greater with Tanzu Greenplum Database 5.2.0 and greater.
Note: These Greenplum Database releases are not certified on DCA because of an incompatibility in configuring timezone information: 5.5.0, 5.6.0, 5.6.1, 5.7.0, 5.8.0.
These Greenplum Database releases are certified on DCA: 5.7.1, 5.8.1, 5.9.0 and later releases, and 5.x releases prior to 5.5.0.
Tanzu Greenplum 5.29.6 supports these Java versions:
- 8.xxx
- 7.xxx
Greenplum Database 5.29.6 software that runs on Linux systems uses OpenSSL 1.0.2l (with FIPS 2.0.16), cURL 7.54, OpenLDAP 2.4.44, and Python 2.7.12.
Greenplum Database client software that runs on Windows and AIX systems uses OpenSSL 0.9.8zg.
The Greenplum Database s3 external table protocol supports these data sources:
- Amazon Simple Storage Service (Amazon S3)
- Dell EMC Elastic Cloud Storage (ECS), an Amazon S3 compatible service
The gpbackup and gprestore utilities support using Dell EMC Data Domain Boost software with the DD Boost Storage Plugin. See Data Domain Boost in the VMware Greenplum Backup and Restore documentation.
Note: Tanzu Greenplum 5.29.6 does not support the ODBC driver for Cognos Analytics V11.
Connecting to IBM Cognos software with an ODBC driver is not supported. Greenplum Database supports connecting to IBM Cognos software with the DataDirect JDBC driver for Tanzu Greenplum. This driver is available as a download from Tanzu Network.
Veritas NetBackup
Tanzu Greenplum 5.29.6 supports backup with Veritas NetBackup version 7.7.3. See Backing Up Databases with Veritas NetBackup.
Supported Platform Notes
The following notes describe platform support for Tanzu Greenplum. Please send any questions or comments to Tanzu Support at https://tanzu.vmware.com/support.
Tanzu Greenplum is supported using either IPV4 or IPV6 protocols.
The only file system supported for running Greenplum Database is the XFS file system. All other file systems are explicitly not supported by VMware.
Greenplum Database is supported on network or shared storage if the shared storage is presented as a block device to the servers running Greenplum Database and the XFS file system is mounted on the block device. Network file systems are not supported. When using network or shared storage, Greenplum Database mirroring must be used in the same way as with local storage, and no modifications may be made to the mirroring scheme or the recovery scheme of the segments. Other features of the shared storage such as de-duplication and/or replication are not directly supported by Tanzu Greenplum Database, but may be used with support of the storage vendor as long as they do not interfere with the expected operation of Greenplum Database at the discretion of VMware.
Greenplum Database is supported when running on virtualized systems, as long as the storage is presented as block devices and the XFS file system is mounted for the storage of the segment directories.
A minimum of 10-gigabit network is required for a system configuration to be supported by VMware.
Greenplum Database is supported on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Compute (GCP).
AWS - For production workloads, r4.8xlarge and r4.16xlarge instance types with four 12TB ST1 EBS volumes for each segment host, or d2.8xlarge with ephemeral storage configured with 4 RAID 0 volumes, are supported. EBS storage is recommended. EBS storage is more reliable and provides more features than ephemeral storage. Note that Amazon has no provisions to replace a bad ephemeral drive; when a disk failure occurs, you must replace the node with the bad disk.
VMware recommends using an Auto Scaling Group (ASG) to provision nodes in AWS. An ASG automatically replaces bad nodes, and you can add further automation to recover the Greenplum processes on the new nodes automatically.
Deployments should be in a Placement Group within a single Availability Zone. Because Amazon recommends using the same instance type in a Placement Group, use a single instance type for all nodes, including the masters.
Azure - For production workloads, VMware recommends the Standard_H8 instance type with four 2TB disks and 2 segments per host, or the Standard_H16 instance type with eight 2TB disks and 4 segments per host. Software RAID 0 is required so that the number of volumes does not exceed the number of segments.
For Azure deployments, you must also configure the Greenplum Database system to not use port 65330. Add the following line to the sysctl.conf file on all Greenplum Database hosts:
net.ipv4.ip_local_reserved_ports=65330
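A minimal sketch of applying that setting, assuming the standard /etc/sysctl.conf location; run it on every Greenplum Database host:
$ echo "net.ipv4.ip_local_reserved_ports=65330" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p   # reload kernel parameters so the port reservation takes effect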
GCP - For all workloads, the n1-standard-8 and n1-highmem-8 instance types are supported; these are relatively small instance types because disk performance in GCP limits the configuration to just 2 segments per host, with many hosts used to scale. Use pd-standard disks with a recommended disk size of 6 TB. From a performance perspective, use a factor of 8 when determining how many nodes to deploy in GCP; for example, a 16-segment-host cluster in AWS would require 128 nodes in GCP.
For Red Hat Enterprise Linux 7.2 or CentOS 7.2, the default systemd setting RemoveIPC=yes removes IPC connections when non-system users log out. This causes the Greenplum Database utility gpinitsystem to fail with semaphore errors. To avoid this issue, see “Setting the Greenplum Recommended OS Parameters” in the Greenplum Database Installation Guide.
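The commonly documented mitigation, sketched here under the assumption that your hosts use /etc/systemd/logind.conf, is to turn off IPC removal and restart the logind service; see the installation guide referenced above for the authoritative steps:
# In /etc/systemd/logind.conf on each host:
RemoveIPC=no
# Then restart the service as root:
$ service systemd-logind restart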
Tanzu Greenplum Tools and Extensions Compatibility
- Client Tools
- Extensions
- Tanzu Greenplum Data Connectors
- Tanzu GPText Compatibility
- Tanzu Greenplum Command Center
Client Tools
Greenplum releases a number of client tool packages on various platforms that can be used to connect to Greenplum Database and the Greenplum Command Center management tool. The following table describes the compatibility of these packages with this Greenplum Database release.
Tool packages are available from Tanzu Network.
Tool | Description of Contents | Tool Version(s) | Server Version(s) |
---|---|---|---|
Tanzu Greenplum Clients | Greenplum Database Command-Line Interface (psql) | 5.8 | 5.x |
Tanzu Greenplum Loaders | Greenplum Database Parallel Data Loading Tools (gpfdist, gpload) | 5.8 | 5.x |
Greenplum Command Center | Greenplum Database management tool | 4.14.0 | 5.29.2 and later |
Greenplum Workload Manager1 | Greenplum Database query monitoring and management tool | 1.8.0 | 5.0.0 |
The Greenplum Database Client Tools and Load Tools are supported on the following platforms:
- AIX 7.2 (64-bit) (Client and Load Tools only)2
- Red Hat Enterprise Linux x86_64 7.x (RHEL 7)
- Red Hat Enterprise Linux x86_64 6.x (RHEL 6)
- SuSE Linux Enterprise Server x86_64 SLES 11 SP4, or SLES 12 SP2/SP3
- Windows 10 (32-bit and 64-bit)
- Windows 8 (32-bit and 64-bit)
- Windows Server 2012 (32-bit and 64-bit)
- Windows Server 2012 R2 (32-bit and 64-bit)
- Windows Server 2008 R2 (32-bit and 64-bit)
Note: 1For Greenplum Command Center 4.0.0 and later, workload management is an integrated Command Center feature rather than the separate tool Greenplum Workload Manager.
2For Greenplum Database 5.4.1 and earlier 5.x releases, download the AIX Client and Load Tools package either from the Greenplum Database 5.11.1 file collection or the Greenplum Database 5.0.0 file collection on Tanzu Network.
Extensions
Greenplum Extension | Versions |
---|---|
MADlib machine learning for Greenplum Database 5.x1 | MADlib 1.17, 1.16, 1.15.1, 1.15, 1.14 |
PL/Java for Greenplum Database 5.x | PL/Java 1.4.32 |
PL/R for Greenplum Database 5.x | 2.3.3 |
PostGIS Spatial and Geographic Objects for Greenplum Database 5.x | 2.1.5+pivotal.2 |
Python Data Science Module Package for Greenplum Database 5.x3 | 1.1.1, 1.1.0, 1.0.0 |
R Data Science Library Package for Greenplum Database 5.x4 | 1.0.1, 1.0.0 |
PL/Container for Greenplum Database 5.x | 1.15, 1.26, 1.3, 1.4, 1.5, 1.6 |
Note: 1VMware recommends that you upgrade to the most recent version of MADlib. For information about MADlib support and upgrade information, see the MADlib FAQ. For information on installing the MADlib extension in Greenplum Database, see Greenplum MADlib Extension for Analytics in the Greenplum Database Reference Guide.
2The PL/Java extension package version 1.4.3 is compatible only with Greenplum Database 5.11.0 and later; it is not compatible with 5.10.x or earlier. If you are upgrading from Greenplum Database 5.10.x or earlier and have installed PL/Java 1.4.2, you must upgrade the PL/Java extension to version 1.4.3.
3For information about the Python package, including the modules provided, see the Python Data Science Module Package in the Greenplum Database Documentation.
4For information about the R package, including the libraries provided, see the R Data Science Library Package in the Greenplum Database Documentation.
5To upgrade from PL/Container 1.0 to PL/Container 1.1 and later, you must drop the PL/Container 1.0 language before registering the new version of PL/Container. For information on upgrading the PL/Container extension in Greenplum Database, see PL/Container Language Extension in the Greenplum Database Reference Guide.
6PL/Container version 1.2 can utilize the resource group capabilities that were introduced in Greenplum Database 5.8.0. If you downgrade to a Greenplum Database system that uses PL/Container 1.1 or earlier, you must use plcontainer runtime-edit to remove any resource_group_id settings from the PL/Container runtime configuration file. See Upgrading from PL/Container 1.1.
These Greenplum Database extensions are installed with Greenplum Database:
- Fuzzy String Match Extension
- PL/Python Extension
- pgcrypto Extension
Tanzu Greenplum Data Connectors
Greenplum Platform Extension Framework (PXF) - PXF, integrated with Greenplum Database, provides access to HDFS, Hive, HBase, and SQL external data stores. Refer to Accessing External Data with PXF in the Greenplum Database Administrator Guide for PXF configuration and usage information.
Note: PXF is available only for supported Red Hat and CentOS platforms. PXF is not available for supported SuSE platforms.
Greenplum-Spark Connector - The Tanzu Greenplum-Spark Connector supports high speed, parallel data transfer from Greenplum Database to an Apache Spark cluster. The Greenplum-Spark Connector is available as a separate download from Tanzu Network. Refer to the Greenplum-Spark Connector documentation for compatibility and usage information.
Greenplum-Informatica Connector - The Tanzu Greenplum-Informatica connector supports high speed data transfer from an Informatica PowerCenter cluster to a Greenplum Database cluster for batch and streaming ETL operations. See the Greenplum-Informatica Connector Documentation.
Greenplum-Kafka Integration - The Greenplum-Kafka Integration provides high speed, parallel data transfer from a Kafka cluster to a Tanzu Greenplum Database cluster for batch and streaming ETL operations. Refer to the Greenplum-Kafka Integration Documentation for more information about this feature.
Greenplum Streaming Server - The Tanzu Greenplum Streaming Server is an ETL tool that provides high speed, parallel data transfer from Informatica, Kafka, and custom client data sources to a Greenplum Database cluster. Refer to the Tanzu Greenplum Streaming Server Documentation for more information about this feature.
Gemfire-Greenplum Connector - The Tanzu GemFire-Greenplum Connector supports the transfer of data between a GemFire region and a Greenplum Database cluster. The GemFire-Greenplum Connector is available as a separate download from Tanzu Network. Refer to the GemFire-Greenplum Connector documentation for compatibility and usage information.
Tanzu GPText Compatibility
Tanzu Greenplum Database 5.29.6 is compatible with Tanzu GPText version 2.1.3 and later.
Greenplum Command Center
For GPCC and Greenplum Workload Manager compatibility information, see the Greenplum Command Center 3.x and 2.x Release Notes.
Note: For Greenplum Command Center 4.0.0 and later, workload management is an integrated Command Center feature rather than the separate tool Greenplum Workload Manager.
Hadoop Distribution Compatibility
Greenplum Database provides access to HDFS with gphdfs and the Greenplum Platform Extension Framework (PXF).
PXF Hadoop Distribution Compatibility
PXF can use Cloudera, Hortonworks Data Platform, MapR, and generic Apache Hadoop distributions. PXF bundles all of the JAR files on which it depends, and includes and supports the following Hadoop library versions:
PXF Version | Hadoop Version | Hive Server Version | HBase Server Version |
---|---|---|---|
5.10, 5.11, 5.12, 5.13, 5.14 | 2.x, 3.1+ | 1.x, 2.x, 3.1+ | 1.3.2 |
<= 5.8.2 | 2.x | 1.x | 1.3.2 |
If you plan to access JSON format data stored in a Cloudera Hadoop cluster, PXF requires a Cloudera version 5.8 or later Hadoop distribution.
gphdfs Hadoop Distribution Compatibility
The supported Hadoop distributions for gphdfs are listed below:
Hadoop Distribution | Version | gp_hadoop_target_version |
---|---|---|
Cloudera | CDH 5.x | cdh |
Hortonworks Data Platform | HDP 2.x | hdp |
MapR | MapR 4.x, MapR 5.x | mpr |
Apache Hadoop | 2.x | hadoop |
Note: MapR requires the MapR client.
Upgrading to Greenplum Database 5.29.x
The upgrade path supported for this release is Greenplum Database 5.x to Greenplum Database 5.29.6. Upgrading a Greenplum Database 4.3.x release to Tanzu Greenplum 5.x is not supported. See Migrating Data to Tanzu Greenplum 5.x.
Note: If you are upgrading Greenplum Database on a DCA system, see Tanzu Greenplum on DCA Systems.
Important: VMware recommends that customers set the Greenplum Database timezone to a value that is compatible with their host systems. Setting the Greenplum Database timezone prevents Greenplum Database from selecting a timezone each time the cluster is restarted and sets the timezone for the Greenplum Database master and segment instances. After you upgrade to this release and if you have not set a Greenplum Database timezone value, verify that the selected Greenplum Database timezone is acceptable for your deployment. See Configuring Timezone and Localization Settings for more information.
Prerequisites
Before starting the upgrade process, VMware recommends performing the following checks.
Verify the health of the Greenplum Database host hardware, and verify that the hosts meet the requirements for running Greenplum Database. The Greenplum Database gpcheckperf utility can assist you in confirming the host requirements.
Note: If you need to run the gpcheckcat utility, VMware recommends running it a few weeks before the upgrade and that you run gpcheckcat during a maintenance period. If necessary, you can resolve any issues found by the utility before the scheduled upgrade.
The utility is in $GPHOME/bin. VMware recommends that Greenplum Database be in restricted mode when you run the gpcheckcat utility. See the Greenplum Database Utility Guide for information about the gpcheckcat utility.
If gpcheckcat reports catalog inconsistencies, you can run gpcheckcat with the -g option to generate SQL scripts to fix the inconsistencies.
After you run the SQL scripts, run gpcheckcat again. You might need to repeat the process of running gpcheckcat and creating SQL scripts to ensure that there are no inconsistencies. VMware recommends that the SQL scripts generated by gpcheckcat be run on a quiescent system. The utility might report false alerts if there is activity on the system.
Important: If the gpcheckcat utility reports errors, but does not generate a SQL script to fix the errors, contact Tanzu Support. Information for contacting Tanzu Support is at https://tanzu.vmware.com/support.
During the migration process from Greenplum Database 5.0.0, a backup is made of some files and directories in $MASTER_DATA_DIRECTORY. VMware recommends that files and directories that are not used by Greenplum Database be backed up, if necessary, and removed from the $MASTER_DATA_DIRECTORY before migration. For information about the Greenplum Database migration utilities, see the Greenplum Database Documentation.
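A hedged sketch of the gpcheckcat flow described above; the database name and output directory are placeholders, and the -g option writes the generated repair SQL scripts to the specified directory:
$ gpstart -R                                # start Greenplum Database in restricted mode
$ gpcheckcat -g ./gpcheckcat_repair mydb    # check the catalogs and generate repair scripts
$ gpcheckcat mydb                           # run again after applying the generated SQL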
For information about supported versions of Greenplum Database extensions, see Tanzu Greenplum Tools and Extensions Compatibility.
Pre-Upgrade Actions
Perform the following pre-upgrade actions if applicable to your Greenplum configuration:
If you are using Data Domain Boost, you must re-enter your DD Boost credentials after upgrading to Greenplum Database 5.29.6, as follows:
gpcrondump --ddboost-host ddboost_hostname --ddboost-user ddboost_user --ddboost-backupdir backup_directory
Note: If you do not reenter your login credentials after an upgrade, your backup will never start because the Greenplum Database cannot connect to the Data Domain system. You will receive an error advising you to check your login credentials.
If you have configured the Greenplum Platform Extension Framework (PXF) in your previous Greenplum Database installation, you must stop the PXF service, and you might need to back up PXF configuration files before upgrading to a new version of Greenplum Database. Refer to PXF Pre-Upgrade Actions for instructions.
If you do not plan to use PXF, or you have not yet configured PXF, no action is necessary.
If you have configured and used the Greenplum Streaming Server (GPSS) in your previous Greenplum Database installation, you must stop any running GPSS jobs and service instances before you upgrade to a new version of Greenplum Database. Refer to GPSS Pre-Upgrade Actions for instructions.
If you do not plan to use GPSS, or you have not yet configured GPSS, no action is necessary.
Upgrading from 5.x to 5.29.6
An upgrade from 5.x to 5.29.6 involves stopping Greenplum Database, updating the Greenplum Database software binaries, and upgrading and restarting Greenplum Database. If you are using Greenplum Database extension packages, there are additional requirements. See Prerequisites in the previous section.
Note: If you are upgrading from Greenplum Database 5.10.x or earlier and have installed the PL/Java extension, you must upgrade the PL/Java extension to extension package version 1.4.3. Previous releases of the PL/Java extension are not compatible with Greenplum Database 5.11.0 and later. For information about the PL/Java extension package, see Tanzu Greenplum Tools and Extensions Compatibility.
Note: If you have databases that were created with Greenplum Database 5.10.x or an earlier 5.x release, upgrade the gp_bloat_expected_pages
view in the gp_toolkit
schema. For information about the issue and how to check a database for the issue, see Update for gp_toolkit.gp_bloat_expected_pages Issue.
Note: If you are upgrading from Greenplum Database 5.7.0 or an earlier 5.x release and have configured PgBouncer in your Greenplum Database installation, you must migrate to the new PgBouncer when you upgrade Greenplum Database. Refer to Migrating PgBouncer for specific migration instructions.
Note: If you have databases that were created with Greenplum Database 5.3.0 or an earlier 5.x release, upgrade the gp_bloat_diag
function and view in the gp_toolkit
schema. For information about the issue and how to check a database for the issue, see Update for gp_toolkit.gp_bloat_diag Issue.
Note: If the Greenplum Command Center database gpperfmon
is installed in your Greenplum Database system, the migration process changes the distribution key of the Greenplum Database log_alert_* tables to the logtime
column. The redistribution of the table data occurs only the first time you start Greenplum Database after the migration and might take some time.
Log in to your Greenplum Database master host as the Greenplum administrative user:
$ su - gpadmin
Perform a smart shutdown of your current Greenplum Database 5.x system (there can be no active connections to the database). This example uses the -a option to disable confirmation prompts:
$ gpstop -a
If you installed the earlier Greenplum Database 5.x using the binary installer:
Download and run the binary installer for Greenplum Database 5.29.6 on the Greenplum Database master host.
When prompted, choose an installation location in the same base directory as your current installation. For example, if you installed to the default location of /usr/local, then install version 5.29.6 into:
/usr/local/greenplum-db-5.29.6
Run the gpseginstall utility to install the 5.29.6 binaries on all the segment hosts specified in the hostfile. For example:
$ gpseginstall -f hostfile
Note: The gpseginstall utility copies the installed files from the current host to the remote hosts. It does not use yum or rpm to install Greenplum Database on the remote hosts, even if you used one of those utilities to install Greenplum Database on the current host. Use the following step if you installed a Greenplum RPM package instead of using the binary installer.
If you installed the earlier Greenplum Database 5.x using the RPM package:
Download the RPM installer for Greenplum Database 5.29.6 and copy it to the Greenplum Database master host, standby host, and all segment hosts.
If you used yum to install Greenplum Database to the default location, execute this command on each host to upgrade to the new software release:
$ sudo yum upgrade ./greenplum-db-5.29.6-<platform>.rpm
If you instead used rpm to install Greenplum Database to a non-default location, execute rpm on each host to upgrade to the new software release and specify the same custom installation directory with the --prefix option. For example:
$ sudo rpm -U ./greenplum-db-5.29.6-<platform>.rpm --prefix=<directory>
Update the permissions for the new installation. For example, run this command as root to change user and group of the installed files to gpadmin:
# chown -R gpadmin:gpadmin /usr/local/greenplum*
Replace /usr/local with your custom installation directory if you installed to a non-default directory.
If needed, update the greenplum_path.sh file for use with your specific installation. These are some examples.
If Greenplum Database uses LDAP authentication, edit the greenplum_path.sh file to add the line:
export LDAPCONF=/etc/openldap/ldap.conf
If Greenplum Database uses PL/Java, you might need to set or update the environment variables JAVA_HOME and LD_LIBRARY_PATH in greenplum_path.sh.
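For example, a sketch of PL/Java-related settings added to greenplum_path.sh; the JDK path shown is a placeholder and depends on where Java is installed on your hosts:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0                                  # placeholder path to your JDK installation
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH   # directory containing libjvm.so for JDK 8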
Note: When comparing the previous and new greenplum_path.sh files, be aware that installing some Greenplum Database extensions also updates the greenplum_path.sh file. The greenplum_path.sh from the previous release might contain updates that were the result of those extensions. See step 9 for installing Greenplum Database extensions.
Edit the environment of the Greenplum Database superuser (gpadmin) and make sure you are sourcing the greenplum_path.sh file for the new installation. For example, change the following line in .bashrc or your chosen profile file:
source /usr/local/greenplum-db-5.0.0/greenplum_path.sh
to:
source /usr/local/greenplum-db-5.29.6/greenplum_path.sh
Or if you are sourcing a symbolic link (/usr/local/greenplum-db) in your profile files, update the link to point to the newly installed version. For example:
$ rm /usr/local/greenplum-db
$ ln -s /usr/local/greenplum-db-5.29.6 /usr/local/greenplum-db
Source the environment file you just edited. For example:
$ source ~/.bashrc
Use the Greenplum Database gppkg utility to install Greenplum Database extensions. If you were previously using any Greenplum Database extensions such as pgcrypto, PL/R, PL/Java, PL/Perl, and PostGIS, download the corresponding packages from Tanzu Network, and install using this utility. See the Greenplum Database Documentation for gppkg usage details.
Also copy any additional files that are used by the extensions (such as JAR files, shared object files, and libraries) from the previous version installation directory to the new version installation directory on the master and segment host systems.
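For example, a sketch of installing a downloaded extension package with gppkg; the package file name is a placeholder for the file you download from Tanzu Network:
$ gppkg -i pljava-1.4.3-gp5-rhel7-x86_64.gppkg   # install one extension package (placeholder file name)
$ gppkg -q --all                                 # list the packages that are now installed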
If you are upgrading from Greenplum Database 5.7 or an earlier 5.x release and have configured PgBouncer in your Greenplum Database installation, you must migrate to the new PgBouncer when you upgrade Greenplum Database. Refer to Migrating PgBouncer for specific migration instructions.
After all segment hosts have been upgraded, you can log in as the gpadmin user and restart your Greenplum Database system:
# su - gpadmin
$ gpstart
If you are utilizing Data Domain Boost, you must re-enter your DD Boost credentials after upgrading to Greenplum Database 5.29.6 as follows:
gpcrondump --ddboost-host ddboost_hostname --ddboost-user ddboost_user
--ddboost-backupdir backup_directory
Note: If you do not re-enter your login credentials after an upgrade, your backup will never start because Greenplum Database cannot connect to the Data Domain system. You will receive an error advising you to check your login credentials.
If you configured PXF in your previous Greenplum Database installation, you must re-initialize the PXF service after you upgrade Greenplum Database. Refer to Upgrading PXF for instructions.
If you configured GPSS in your previous Greenplum Database installation, you may be required to perform some upgrade actions, and you must restart the GPSS service instances and jobs. Refer to Upgrading GPSS for instructions.
After upgrading Greenplum Database, verify that features work as expected. For example, test that backup and restore perform as expected, and that Greenplum Database features such as user-defined functions, and extensions such as MADlib and PostGIS, work as expected.
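For example, a minimal post-upgrade sanity check; template1 is used only because it is always present:
$ gpstate -s                                   # verify that all segments are up and in their expected roles
$ psql -d template1 -c 'SELECT version();'     # verify that the server reports version 5.29.6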
Troubleshooting a Failed Upgrade
If you experience issues during the migration process and have active entitlements for Greenplum Database that were purchased through Tanzu, contact Tanzu Support. Information for contacting Tanzu Support is at https://tanzu.vmware.com/support.
Be prepared to provide the following information:
- A completed Upgrade Procedure.
- Log output from gpcheckcat (located in ~/gpAdminLogs)
Migrating Data to Tanzu Greenplum 5.x
Upgrading a Greenplum Database 4.x system directly to Tanzu Greenplum Database 5.x is not supported.
You can migrate existing data to Greenplum Database 5.x using standard backup and restore procedures (gpcrondump and gpdbrestore) or by using gptransfer. The gpcopy utility can be used to migrate data from Greenplum Database 4.3.26 or later to 5.9 or later.
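For example, a hedged sketch of a gpcopy invocation for such a migration; the host names, ports, and database name are placeholders, and you should verify the option names against the gpcopy documentation for your release:
$ gpcopy --source-host gp43-mdw --source-port 5432 \
         --dest-host gp5-mdw --dest-port 5432 \
         --dbname sales --dest-dbname sales          # copy one database from the 4.3.x cluster to the 5.x cluster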
Follow these general guidelines for migrating data:
Make sure that you have a complete backup of all data in the Greenplum Database 4.3.x cluster, and that you can successfully restore the Greenplum Database 4.3.x cluster if necessary.
You must install and initialize a new Greenplum Database 5.x cluster using the version 5.x gpinitsystem utility.
Note: Unless you modify file locations manually, gpdbrestore only supports restoring data to a cluster that has an identical number of hosts and an identical number of segments per host, with each segment having the same content_id as the segment in the original cluster. If you initialize the Greenplum Database 5.x cluster using a configuration that is different from the version 4.3 cluster, then follow the steps outlined in Restoring to a Different Greenplum System Configuration to manually update the file locations.
Important: For Greenplum Database 5.x, VMware recommends that customers set the Greenplum Database timezone to a value that is compatible with the host systems. Setting the Greenplum Database timezone prevents Greenplum Database from selecting a timezone each time the cluster is restarted and sets the timezone for the Greenplum Database master and segment instances. See Configuring Timezone and Localization Settings for more information.
If you intend to install Greenplum Database 5.x on the same hardware as your 4.3.x system, you will need enough disk space to accommodate over 5 times the original data set (2 full copies of the primary and mirror data sets, plus the original backup data in ASCII format) in order to migrate data with gpcrondump and gpdbrestore. Keep in mind that the ASCII backup data will require more disk space than the original data, which may be stored in compressed binary format. Offline backup solutions such as Dell EMC Data Domain or Veritas NetBackup can reduce the required disk space on each host.
If you attempt to migrate your data on the same hardware but run out of free space, gpcopy provides the --truncate-source-after option to truncate each source table after copying the table to the destination cluster and validating the copy succeeded. This reduces the amount of free space needed to migrate clusters that reside on the same hardware. See Migrating Data with gpcopy for more information.
Use the version 5.x gpdbrestore utility to load the 4.3.x backup data into the new cluster.
If the Greenplum Database 5.x cluster resides on separate hardware from the 4.3.x cluster, and the clusters have different numbers of segments, you can optionally use the version 5.x gptransfer utility to migrate the 4.3.x data. You must initiate the gptransfer operation from the version 5.x cluster, pulling the older data into the newer system.
On a Greenplum Database system with FIPS enabled, validating table data with MD5 (specifying the gptransfer option --validate=md5) is not available. Use the option sha256 to validate table data.
Validating table data with SHA-256 (specifying the option --validate=sha256) requires the Greenplum Database pgcrypto extension. The extension is included with Tanzu Greenplum 5.x. The extension package must be installed on supported Tanzu Greenplum 4.3.x systems. Support for pgcrypto functions in a Greenplum 4.3.x database is not required.
Greenplum Database 5.x removes automatic implicit casts between the text type and other data types. After you migrate from Greenplum Database version 4.3.x to version 5.x, this change in behavior may impact existing applications and queries. Refer to About Implicit Text Casting in Greenplum Database in the Greenplum Database Installation Guide for information, including a discussion about supported and unsupported workarounds.
After migrating data, you may need to modify SQL scripts, administration scripts, and user-defined functions to account for changes in Greenplum Database version 5.x. Look for Upgrade Action Required entries in the Tanzu Greenplum 5.0.0 Release Notes for features that may necessitate post-migration tasks.
If you are migrating from Greenplum Database 4.3.27 or an earlier 4.3.x release and have configured PgBouncer in your Greenplum Database installation, you must migrate to the new PgBouncer when you upgrade Greenplum Database. Refer to Migrating PgBouncer for specific migration instructions.
Tanzu Greenplum on DCA Systems
On supported Dell EMC DCA systems, you can install Tanzu Greenplum 5.29.6, or you can upgrade from Tanzu Greenplum 5.x to 5.29.6.
Only Tanzu Greenplum Database is supported on DCA systems. Open source versions of Greenplum Database are not supported.
- Installing the Tanzu Greenplum 5.29.6 Software Binaries on DCA Systems
- Upgrading from 5.x to 5.29.6 on DCA Systems
Important: Upgrading Greenplum Database 4.3.x to Tanzu Greenplum 5.29.6 is not supported. See Migrating Data to Tanzu Greenplum 5.x.
Note: Because of an incompatibility in configuring timezone information, these Greenplum Database releases are not certified on DCA: 5.5.0, 5.6.0, 5.6.1, 5.7.0, and 5.8.0.
These Greenplum Database releases are certified on DCA: 5.7.1, 5.8.1, 5.9.0 and later releases, and 5.x releases prior to 5.5.0.
Installing the Tanzu Greenplum 5.29.6 Software Binaries on DCA Systems
Important: This section is only for installing Tanzu Greenplum 5.29.6 on DCA systems. Also, see the information on the Dell EMC support site (requires login).
For information about installing Tanzu Greenplum on non-DCA systems, see the Greenplum Database Installation Guide.
Prerequisites
Ensure your DCA system supports Tanzu Greenplum 5.29.6. See Supported Platforms.
Ensure Greenplum Database 4.3.x is not installed on your system.
Installing Tanzu Greenplum 5.29.6 on a DCA system with an existing Greenplum Database 4.3.x installation is not supported. For information about uninstalling Greenplum Database software, see your Dell EMC DCA documentation.
Installing Tanzu Greenplum 5.29.6
Download or copy the Greenplum Database DCA installer file greenplum-db-appliance-5.29.6-RHEL6-x86_64.bin to the Greenplum Database master host.
As root, run the DCA installer for 5.29.6 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster, one host name per line. If necessary, copy hostfile to the directory containing the installer before running the installer. This example command runs the installer for Greenplum Database 5.29.6.
# ./greenplum-db-appliance-5.29.6-RHEL6-x86_64.bin hostfile
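For example, a sketch of the hostfile contents; the host names shown follow typical DCA naming but are placeholders for the hosts in your cluster:
$ cat hostfile
mdw
smdw
sdw1
sdw2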
Upgrading from 5.x to 5.29.6 on DCA Systems
Upgrading Tanzu Greenplum from 5.x to 5.29.6 on a Dell EMC DCA system involves stopping Greenplum Database, updating the Greenplum Database software binaries, and restarting Greenplum Database.
Important: This section is only for upgrading to Tanzu Greenplum 5.29.6 on DCA systems. For information about upgrading on non-DCA systems, see Upgrading to Greenplum Database 5.29.6.
Note: If you are upgrading from Greenplum Database 5.10.x or earlier and have installed the PL/Java extension, you must upgrade the PL/Java extension to extension package version 1.4.3. Previous releases of the PL/Java extension are not compatible with Greenplum Database 5.11.0 and later. For information about the PL/Java extension package, see Tanzu Greenplum Tools and Extensions Compatibility.
Note: If you have databases that were created with Greenplum Database 5.10.x or an earlier 5.x release, upgrade the gp_bloat_expected_pages
view in the gp_toolkit
schema. For information about the issue and how to check a database for the issue, see Update for gp_toolkit.gp_bloat_expected_pages Issue.
Note: If you are upgrading from Greenplum Database 5.7.0 or an earlier 5.x release and have configured PgBouncer in your Greenplum Database installation, you must migrate to the new PgBouncer when you upgrade Greenplum Database. Refer to Migrating PgBouncer for specific migration instructions.
Note: If you have databases that were created with Greenplum Database 5.3.0 or an earlier 5.x release, upgrade the gp_bloat_diag
function and view in the gp_toolkit
schema. For information about the issue and how to check a database for the issue, see Update for gp_toolkit.gp_bloat_diag Issue.
Log in to your Greenplum Database master host as the Greenplum administrative user (gpadmin):
# su - gpadmin
Download or copy the installer file greenplum-db-appliance-5.29.6-RHEL6-x86_64.bin to the Greenplum Database master host.
Perform a smart shutdown of your current Greenplum Database 5.x system (there can be no active connections to the database). This example uses the -a option to disable confirmation prompts:
$ gpstop -a
As root, run the Greenplum Database DCA installer for 5.29.6 on the Greenplum Database master host and specify the file hostfile that lists all hosts in the cluster. If necessary, copy hostfile to the directory containing the installer before running the installer. This example command runs the installer for Greenplum Database 5.29.6 for Red Hat Enterprise Linux 6.x.
# ./greenplum-db-appliance-5.29.6-RHEL6-x86_64.bin hostfile
The file hostfile is a text file that lists all hosts in the cluster, one host name per line.
If needed, update the greenplum_path.sh file for use with your specific installation. These are some examples.
If Greenplum Database uses LDAP authentication, edit the greenplum_path.sh file to add the line:
export LDAPCONF=/etc/openldap/ldap.conf
If Greenplum Database uses PL/Java, you might need to set or update the environment variables JAVA_HOME and LD_LIBRARY_PATH in greenplum_path.sh.
Note: When comparing the previous and new greenplum_path.sh files, be aware that installing some Greenplum Database extensions also updates the greenplum_path.sh file. The greenplum_path.sh from the previous release might contain updates that were the result of those extensions. See step 6 for installing Greenplum Database extensions.
Install Greenplum Database extension packages. For information about installing a Greenplum Database extension package, see gppkg in the Greenplum Database Utility Guide.
Also migrate any additional files that are used by the extensions (such as JAR files, shared object files, and libraries) from the previous version installation directory to the new version installation directory.
After all segment hosts have been upgraded, you can log in as the gpadmin user and restart your Greenplum Database system:
# su - gpadmin
$ gpstart
If you are utilizing Data Domain Boost, you must re-enter your DD Boost credentials after upgrading to Greenplum Database 5.29.6 as follows:
gpcrondump --ddboost-host ddboost_hostname --ddboost-user ddboost_user
--ddboost-backupdir backup_directory
Note: If you do not re-enter your login credentials after an upgrade, your backup will never start because Greenplum Database cannot connect to the Data Domain system. You will receive an error advising you to check your login credentials.
After upgrading Greenplum Database, verify that features work as expected. For example, test that backup and restore perform as expected, and that Greenplum Database features such as user-defined functions, and extensions such as MADlib and PostGIS, work as expected.
Update for gp_toolkit.gp_bloat_expected_pages Issue
In Greenplum Database 5.10.x and earlier 5.x releases, the Greenplum Database view gp_toolkit.gp_bloat_expected_pages might incorrectly report that a root partition table is bloated even though root partition tables do not contain data. This information could cause a user to run a VACUUM FULL operation on the partitioned table when the operation was not required. The issue was resolved in Greenplum Database 5.11.0 (resolved issue 29523).
When updating Greenplum Database, the gp_toolkit.gp_bloat_expected_pages
view must be updated in databases created with a Greenplum Database 5.10.x or an earlier 5.x release. This issue has been fixed in databases created with Greenplum Database 5.11.0 and later. For information about using template0
as the template database after upgrading from Greenplum Database 5.10.x or an earlier 5.x release, see known issue 29523.
To check whether the gp_toolkit.gp_bloat_expected_pages
view in a database requires an update, run the psql
command \d+
to display the view definition.
\d+ gp_toolkit.gp_bloat_expected_pages
The updated view definition contains this predicate.
AND NOT EXISTS
( SELECT parrelid
FROM pg_partition
WHERE parrelid = pgc.oid )
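For example, a hedged sketch of running that check from the shell; the database name mytest is a placeholder:
$ psql -d mytest -c '\d+ gp_toolkit.gp_bloat_expected_pages' | grep -c pg_partition
A count of 0 means the view definition does not contain the predicate and the update is required.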
Perform the following steps as the gpadmin user to update the view on each database that was created with Greenplum Database 5.10.x or an earlier 5.x release.
Copy the script into a text file on the Greenplum Database master.
Run the script on each database that requires the update.
This example updates the gp_toolkit.gp_bloat_expected_pages view in the database mytest and assumes that the script is in the file gp_bloat_expected_pages.sql in the gpadmin home directory.
psql -f /home/gpadmin/gp_bloat_expected_pages.sql -d mytest
Run the script during a low activity period. Running the script during a high activity period does not affect database functionality but might affect performance.
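For example, a hedged sketch of applying the script (shown below) to every user database in one pass; it assumes the script is saved as /home/gpadmin/gp_bloat_expected_pages.sql and skips only template0:
$ for db in $(psql -At -d postgres -c "SELECT datname FROM pg_database WHERE datname <> 'template0'"); do
      psql -f /home/gpadmin/gp_bloat_expected_pages.sql -d "$db"
  done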
Script to Update gp_toolkit.gp_bloat_expected_pages View
BEGIN;
CREATE OR REPLACE VIEW gp_toolkit.gp_bloat_expected_pages
AS
SELECT
btdrelid,
btdrelpages,
CASE WHEN btdexppages < numsegments
THEN numsegments
ELSE btdexppages
END as btdexppages
FROM
( SELECT
oid as btdrelid,
pgc.relpages as btdrelpages,
CEIL((pgc.reltuples * (25 + width))::numeric / current_setting('block_size')::numeric) AS btdexppages,
(SELECT numsegments FROM gp_toolkit.__gp_number_of_segments) AS numsegments
FROM
( SELECT pgc.oid, pgc.reltuples, pgc.relpages
FROM pg_class pgc
WHERE NOT EXISTS
( SELECT iaooid
FROM gp_toolkit.__gp_is_append_only
WHERE iaooid = pgc.oid AND iaotype = 't' )
AND NOT EXISTS
( SELECT parrelid
FROM pg_partition
WHERE parrelid = pgc.oid )) AS pgc
LEFT OUTER JOIN
( SELECT starelid, SUM(stawidth * (1.0 - stanullfrac)) AS width
FROM pg_statistic pgs
GROUP BY 1) AS btwcols
ON pgc.oid = btwcols.starelid
WHERE starelid IS NOT NULL) AS subq;
GRANT SELECT ON TABLE gp_toolkit.gp_bloat_expected_pages TO public;
COMMIT;
Update for gp_toolkit.gp_bloat_diag Issue
In Greenplum Database 5.3.0 or an earlier 5.x release, Greenplum Database returned an integer out of range error in some cases when performing a query against the gp_toolkit.gp_bloat_diag
view. The issue was resolved in Greenplum Database 5.4.0 (resolved issue 26518).
When updating Greenplum Database, the gp_toolkit.gp_bloat_diag
function and view must be updated in databases created with a Greenplum Database 5.3.0 or an earlier 5.x release. This issue has been fixed in databases created with Greenplum Database 5.4.0 and later. For information about upgrading from Greenplum Database 5.3.0 or an earlier 5.x release and then using template0
as the template database, see known issue 29523.
To check whether the gp_toolkit.gp_bloat_diag function and view in a database require an update, run the psql command \df to display information about the gp_toolkit.gp_bloat_diag function.
\df gp_toolkit.gp_bloat_diag
If the data type for btdexppages is integer, an update is required. If the data type is numeric, an update is not required. In this example, the btdexppages data type is integer and requires an update.
List of functions
-[ RECORD 1 ]-------+------------------------------------------------------------------------------------------------
Schema | gp_toolkit
Name | gp_bloat_diag
Result data type | record
Argument data types | btdrelpages integer, btdexppages integer, aotable boolean, OUT bltidx integer, OUT bltdiag text
Type | normal
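For example, a hedged sketch of running this check from the shell; the database name mytest is a placeholder:
$ psql -d mytest -c '\df gp_toolkit.gp_bloat_diag' | grep -c 'btdexppages integer'
A count greater than 0 means the function still takes an integer btdexppages argument and the update is required.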
Perform the following steps as the gpadmin user to update the function and view to fix the issue on each database that was created with Greenplum Database 5.3.0 or an earlier 5.x release.
Copy the script into a text file on the Greenplum Database master.
Run the script on each database that requires the update.
This example updates the gp_toolkit.gp_bloat_diag function and view in the database mytest and assumes that the script is in the file update_bloat_diag.sql in the gpadmin home directory.
psql -f /home/gpadmin/update_bloat_diag.sql -d mytest
Run the script during a low activity period. Running the script during a high activity period does not affect database functionality but might affect performance.
Script to Update gp_toolkit.gp_bloat_diag Function and View
BEGIN;
CREATE OR REPLACE FUNCTION gp_toolkit.gp_bloat_diag(btdrelpages int, btdexppages numeric, aotable bool,
OUT bltidx int, OUT bltdiag text)
AS
$$
SELECT
bloatidx,
CASE
WHEN bloatidx = 0
THEN 'no bloat detected'::text
WHEN bloatidx = 1
THEN 'moderate amount of bloat suspected'::text
WHEN bloatidx = 2
THEN 'significant amount of bloat suspected'::text
WHEN bloatidx = -1
THEN 'diagnosis inconclusive or no bloat suspected'::text
END AS bloatdiag
FROM
(
SELECT
CASE
WHEN $3 = 't' THEN 0
WHEN $1 < 10 AND $2 = 0 THEN -1
WHEN $2 = 0 THEN 2
WHEN $1 < $2 THEN 0
WHEN ($1/$2)::numeric > 10 THEN 2
WHEN ($1/$2)::numeric > 3 THEN 1
ELSE -1
END AS bloatidx
) AS bloatmapping
$$
LANGUAGE SQL READS SQL DATA;
GRANT EXECUTE ON FUNCTION gp_toolkit.gp_bloat_diag(int, numeric, bool, OUT int, OUT text) TO public;
CREATE OR REPLACE VIEW gp_toolkit.gp_bloat_diag
AS
SELECT
btdrelid AS bdirelid,
fnnspname AS bdinspname,
fnrelname AS bdirelname,
btdrelpages AS bdirelpages,
btdexppages AS bdiexppages,
bltdiag(bd) AS bdidiag
FROM
(
SELECT
fn.*, beg.*,
gp_toolkit.gp_bloat_diag(btdrelpages::int, btdexppages::numeric, iao.iaotype::bool) AS bd
FROM
gp_toolkit.gp_bloat_expected_pages beg,
pg_catalog.pg_class pgc,
gp_toolkit.__gp_fullname fn,
gp_toolkit.__gp_is_append_only iao
WHERE beg.btdrelid = pgc.oid
AND pgc.oid = fn.fnoid
AND iao.iaooid = pgc.oid
) as bloatsummary
WHERE bltidx(bd) > 0;
GRANT SELECT ON TABLE gp_toolkit.gp_bloat_diag TO public;
COMMIT;