OpenEBS Releases
Release Version | Notes | Highlights |
---|---|---|
1.0.0 | Latest Release (Recommended) Release Notes Release Blog Upgrade Steps | - Introduced a cluster-level component called the NDM Operator that manages access to block devices, selects and binds a BlockDevice to a BlockDeviceClaim, and cleans up data from released BlockDevices. - Support for using Block Devices for OpenEBS Local PV (see the StorageClass sketch below the table). - Enhanced the cStor Data Engine to allow interoperability of cStor Replicas across different versions. - Enhanced the cStor Data Engine containers to include troubleshooting utilities. - Enhanced the metrics exported by cStor Pools to include details of provisioning errors. - Fixed an issue where cStor replica snapshots created for a rebuild were not deleted, causing space reclamation to fail. - Fixed an issue where cStor volume used space was reported much lower than it actually was. - Fixed an issue where Jiva replicas failed to register with their target if there was an error during initial registration. - Fixed an issue where NDM would create a partitioned OS device as a block device. - Fixed an issue where Jiva replica data was not cleaned up if the PVC and its namespace were deleted before the scrub job completed. - Fixed an issue where Velero Backup/Restore was not working with hostpath Local PVs. - Upgraded the base Ubuntu images of the containers to fix the security vulnerabilities reported in Ubuntu Xenial. - The custom resource (Disk) used in earlier releases has been changed to BlockDevice. |
0.9.0 | Release Notes | - Enhanced the cStor Data Engine containers to include troubleshooting utilities. - Enhanced the cStor Data Engine to allow interoperability of cStor Replicas across different versions. - Support for using Block Devices for OpenEBS Local PV. - Support for Dynamic Provisioning of Local PV. - Enhanced cStor Volumes to support Backup/Restore to S3-compatible storage using the incremental snapshots supported by cStor Volumes. - Enhanced the cStor Volume Replica to support an anti-affinity feature that works across PVs. - Enhanced the cStor Volume to support scheduling the cStor Volume Targets alongside the application pods that interact with the cStor Volume. - Enhanced Jiva Volume provisioning to provide an option called DeployInOpenEBSNamespace. - Enhanced cStor Volume Provisioning to be customized for varying workload or platform types during volume provisioning. - Enhanced cStor Pools to export usage statistics as Prometheus metrics. - Enhanced the Jiva Volume replica rebuild process by eliminating the need for a rebuild if the Replica already has all the data required to serve the IO. - Enhanced Jiva Volume replica provisioning to pin the Replicas to the nodes where they are initially scheduled, using Kubernetes nodeAffinity. - Fixed an issue where NDM pods failed to start on nodes with selinux=on. - Fixed an issue where cStor Volumes with a single replica were shown to be in a Degraded, rebuilding state. - Fixed an issue where a user was able to delete a PVC even if there were clones created from it, resulting in data loss for the cloned volumes. - Fixed an issue where cStor Volumes failed to provision if the /var/openebs/ directory was not editable by cStor pods, as on SuSE platforms. - Fixed an issue where the Jiva Volume Target could mark a replica as offline if the replica took longer than 30s to complete a sync/unmap IO. - Fixed an issue with the Jiva volume space reclaim thread, which was erroring out with an exception if the replica was disconnected from the target. |
0.8.2 | Release Notes | - Enhanced the metrics exported by cStor Pools to include details of provisioning errors. - Enhanced the cStor Data Engine containers to include troubleshooting utilities. - Enhanced the cStor Data Engine to allow interoperability of cStor Replicas across different versions. - Fixed an issue causing cStor Volume Replica CRs to be stuck when the OpenEBS namespace was being deleted. - Fixed an issue where a newly added cStor Volume Replica might not be successfully registered with the cStor target if the target tried to connect to the Replica before the Replica was completely initialized. - Fixed an issue with Jiva Volumes where the target could mark a Replica as timed out on IO even when the Replica might actually be processing the sync IO. - Fixed an issue with Jiva Volumes that would not allow Replicas to re-connect with the Target if the initial registration failed to successfully process the hand-shake request. - Fixed an issue with Jiva Volumes that would cause the Target to restart when a send-diagnostic command was received from the client. - Fixed an issue causing a PVC to be stuck in the Pending state when there was more than one PVC associated with an Application Pod. - Toleration policy support for cStorStoragePool. |
0.8.1 | Release Blog Release Notes | - Fixed an issue where cStor replica snapshots created for a rebuild were not deleted, causing space reclamation to fail. - Enhanced the metrics exported by cStor Pools to include details of provisioning errors. - Enhanced the cStor Data Engine containers to include troubleshooting utilities. - Ephemeral Disk Support. - Enhanced the placement of cStor volume replicas to be distributed randomly among the available pools. - Enhanced NDM to fetch additional details about the underlying disks via SeaChest. - Enhanced NDM to add additional information to the DiskCRs, such as whether the disk is partitioned or has a filesystem on it. - Enhanced the OpenEBS CRDs to include custom columns displayed in the kubectl get output of the CRs; this feature requires K8s 1.11 or higher. - Fixed an issue where a cStor volume caused a timeout for the iSCSI discovery command and could potentially trigger a K8s vulnerability that can bring down a node with high RAM usage. |
0.8.0 | Release Blog Release Notes | - Fixed an issue where cStor volume used space was reported much lower than it actually was. - Fixed an issue where cStor replica snapshots created for a rebuild were not deleted, causing space reclamation to fail. - Enhanced the metrics exported by cStor Pools to include details of provisioning errors. - cStor Snapshot & Clone. - cStor Volume & Pool runtime status. - Target Affinity for both Jiva & cStor. - Target namespace for cStor. - Enhanced the volume metrics exporter. - Enhanced Jiva to clear internal snapshots taken during Replica rebuild. - Enhanced Jiva to support sync and unmap IOs. - Enhanced cStor to recreate a pool by automatically selecting the disks. |
0.7.2 | Release Notes | - Fixed an issue where Jiva replicas failed to register with their target if there was an error during initial registration. - Fixed an issue where cStor volume used space was reported much lower than it actually was. - Fixed an issue where cStor replica snapshots created for a rebuild were not deleted, causing space reclamation to fail. - Support for clearing the space used by a Jiva replica after the volume is deleted, using a Cron Job. - Support for a storage policy that can disable Jiva Volume space reclaim. - Support for Target Affinity to schedule the Jiva target Pod on the same node as the Application Pod. - Enhanced Jiva handling of the internal snapshots used for rebuilding. - Enhanced exporting of cStor volume metrics to Prometheus. |
0.7.0 | Release Blog Release Notes | - Fixed an issue where NDM would create a partitioned OS device as a block device. - Fixed an issue where Jiva replicas failed to register with their target if there was an error during initial registration. - Fixed an issue where cStor volume used space was reported much lower than it actually was. - Enhanced NDM to discover block devices attached to Nodes. - Alpha support for the cStor Engine. - Naming convention of the Jiva Storage pool as ‘default’ and StorageClass as ‘openebs-jiva-default’. - Naming convention of the cStor Storage pool as ‘cstor-sparse-pool’ and StorageClass as ‘openebs-cstor-sparse’. - Support for specifying the replica count, CPU/Memory limits per PV, choice of storage engine, and the nodes on which data copies should be placed (see the StorageClass policy sketch below the table). |
0.6.0 | Release Blog Release Notes | - Fixed an issue where Jiva replica data was not cleaned up if the PVC and its namespace were deleted before the scrub job completed. - Fixed an issue where NDM would create a partitioned OS device as a block device. - Fixed an issue where Jiva replicas failed to register with their target if there was an error during initial registration. - Integrated the Volume Snapshot capabilities with the Kubernetes Snapshot controller. - Enhanced maya-apiserver to use CAS Templates for orchestrating new Storage Engines. - Enhanced mayactl to show replica details and the nodes where replicas are running. - Enhanced maya-apiserver to schedule Replica Pods on specific nodes using nodeSelector. - Enhanced e2e tests to simulate chaos at different layers such as CPU, RAM, Disk, Network, and Node. - Support for deploying OpenEBS via the Kubernetes stable Helm Charts. - Enhanced Jiva volumes to handle more read-only volume scenarios. |
0.5.4 | Release Notes | - Fixed an issue where Velero Backup/Restore was not working with hostpath Local PVs. - Fixed an issue where Jiva replica data was not cleaned up if the PVC and its namespace were deleted before the scrub job completed. - Fixed an issue where NDM would create a partitioned OS device as a block device. - Provision to specify filesystems other than ext4 (the default). - Support for the XFS filesystem format for a mongodb StatefulSet using an OpenEBS Persistent Volume. - Increased integration test & e2e coverage in the CI. - OpenEBS is now available as a stable chart from Kubernetes. |
0.5.3 | Release Notes | - Upgraded the base Ubuntu images of the containers to fix the security vulnerabilities reported in Ubuntu Xenial. - Fixed an issue where Velero Backup/Restore was not working with hostpath Local PVs. - Fixed an issue where Jiva replica data was not cleaned up if the PVC and its namespace were deleted before the scrub job completed. - Fixed a StoragePool usage issue when RBAC settings are applied. - Improved memory consumption of Jiva Volumes. |
0.5.2 | Release Notes | - The custom resource (Disk) used in earlier releases has been changed to BlockDevice. - Upgraded the base Ubuntu images of the containers to fix the security vulnerabilities reported in Ubuntu Xenial. - Fixed an issue where Velero Backup/Restore was not working with hostpath Local PVs. - Support for setting non-SSL Kubernetes endpoints by specifying ENV variables on maya-apiserver and openebs-provisioner (see the env sketch below the table). |
0.5.1 | Release Notes | - The custom resource (Disk) used in earlier releases has been changed to BlockDevice. - Upgraded the base Ubuntu images of the containers to fix the security vulnerabilities reported in Ubuntu Xenial. - Support for using Jiva volumes from the CentOS iSCSI initiator. - Support for launching openebs-k8s-provisioner in a non-default namespace. |
0.5.0 | Release Blog Release Notes | - The custom resource (Disk) used in earlier releases has been changed to BlockDevice. - Enhanced the Storage Policy Enforcement Framework for Jiva. - Extended the OpenEBS API Server to expose a volume snapshot API. - Support for deploying OpenEBS via Helm charts. - Sample Prometheus configuration for collecting OpenEBS Volume metrics. - Sample Grafana OpenEBS Volume dashboard using the Prometheus metrics. |
0.4.0 | Release Blog Release Notes | - Enhanced maya CLI support for managing snapshots and usage statistics. - OpenEBS Maya API Server now uses the Kubernetes scheduler logic to place OpenEBS Volume Replicas on different nodes. - Extended support for deploying OpenEBS on AWS. - Support for deploying OpenEBS in a minikube setup. - Enhanced openebs-k8s-provisioner to recover from a CrashLoopBackOff state. |
0.3.0 | Release Blog Release Notes | - Support for running OpenEBS hyper-converged on Kubernetes minion nodes. - OpenEBS can be enabled via openebs-operator.yaml. - Supports creation of OpenEBS volumes using the Dynamic Provisioner. - Storage functionality and Orchestration/Management functionality are delivered as container images on DockerHub. |
0.2.0 | Release Blog Release Notes | - Integrated the OpenEBS FlexVolume Driver to dynamically provision OpenEBS Volumes in Kubernetes. - Maya API Server provides a new AWS EBS-like API for provisioning block storage. - Enhanced Maya API Server to run hyper-converged with the Nomad scheduler. - Backup/Restore data from Amazon S3. - Node failure resiliency fixes. |
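
The Local PV highlights in the 1.0.0 and 0.9.0 rows pair Block Devices discovered by NDM with dynamically provisioned Local PVs. Below is a minimal StorageClass sketch for this, assuming the `cas.openebs.io/config` annotation format used by OpenEBS releases of this era; the class name is illustrative.

```yaml
# Illustrative StorageClass for an OpenEBS Local PV backed by a Block Device.
# StorageType "device" asks the provisioner to claim an NDM-discovered
# BlockDevice on the node where the workload pod gets scheduled.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-example        # illustrative name
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "device"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer  # bind only after the pod is scheduled
```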
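The 0.7.0 row mentions per-PV policies such as replica count and choice of storage engine. These policies are expressed on the StorageClass; a sketch assuming the cStor engine and the cstor-sparse-pool created by the default install of that release:

```yaml
# Illustrative StorageClass carrying per-PV policies: the CAS engine (cstor)
# and the number of data copies (ReplicaCount) maintained across pool nodes.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-3replica        # illustrative name
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-sparse-pool"    # pool from the default sparse install
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```

A PVC that names this class gets a cStor volume whose three replicas are placed on distinct pools.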
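The 0.5.2 row notes that non-SSL Kubernetes endpoints can be set through ENV variables on maya-apiserver and openebs-provisioner. A sketch of the relevant container env stanza from an operator-style Deployment; the variable name follows the operator YAML of that era, and the endpoint address is a placeholder:

```yaml
# Illustrative env stanza pointing maya-apiserver (and, analogously,
# openebs-provisioner) at a non-SSL Kubernetes API endpoint.
containers:
  - name: maya-apiserver
    image: openebs/m-apiserver:0.5.2
    env:
      - name: OPENEBS_IO_K8S_MASTER      # per the operator YAML of that era
        value: "http://10.0.0.1:8080"    # placeholder non-SSL endpoint
```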