Elasticsearch - Migration between Versions

Date: 2024-11-03



In any system or software, upgrading to a newer version requires a few steps to preserve the application settings, configurations, data and other things. These steps are required to keep the application stable on the new version and to maintain the integrity of the data (prevent the data from getting corrupted).

Follow these steps to upgrade Elasticsearch −

    Read Upgrade docs from https://www.elastic.co/

    Test the upgraded version in your non-production environments, like a UAT, E2E, SIT or DEV environment.

    Note that rolling back to a previous Elasticsearch version is not possible without a data backup. Hence, a data backup is recommended before upgrading to a higher version.

    We can upgrade using either a full cluster restart or a rolling upgrade. A rolling upgrade is supported only between certain newer versions. Note that there is no service outage when you use the rolling upgrade method for migration.
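The backup recommended above is taken with the snapshot API. The following is a minimal sketch, assuming a cluster reachable on localhost:9200; the repository name my_backup and the location path are placeholders, and the location must be listed under path.repo in elasticsearch.yml −

```shell
# Register a shared file system snapshot repository
# (the name "my_backup" and the path are illustrative).
curl -X PUT "localhost:9200/_snapshot/my_backup" \
  -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/my_backup" }
}'

# Take a snapshot of all indices and wait for it to complete.
curl -X PUT "localhost:9200/_snapshot/my_backup/pre_upgrade_snapshot?wait_for_completion=true"
```

If the upgrade goes wrong, the snapshot can later be restored from the same repository.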

Steps for Upgrade

    Test the upgrade in a dev environment before upgrading your production cluster.

    Back up your data. You cannot roll back to an earlier version unless you have a snapshot of your data.

    Consider closing machine learning jobs before you start the upgrade process. While machine learning jobs can continue to run during a rolling upgrade, they increase the overhead on the cluster during the upgrade process.

    Upgrade the components of your Elastic Stack in the following order −

      Elasticsearch

      Kibana

      Logstash

      Beats

      APM Server
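Instead of closing each machine learning job individually, recent Elasticsearch versions also offer a set-upgrade-mode API that pauses all machine learning activity at once. A sketch, assuming a cluster reachable on localhost:9200 −

```shell
# Put machine learning into upgrade mode before starting the upgrade;
# this pauses all ML jobs and datafeeds without closing them.
curl -X POST "localhost:9200/_ml/set_upgrade_mode?enabled=true"

# After all nodes have been upgraded, resume machine learning activity.
curl -X POST "localhost:9200/_ml/set_upgrade_mode?enabled=false"
```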

Upgrading from 6.6 or Earlier

To upgrade directly to Elasticsearch 7.1.0 from versions 6.0-6.6, you must manually reindex any 5.x indices you need to carry forward, and perform a full cluster restart.
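A 5.x index can be carried forward by copying its documents into a freshly created index with the reindex API. A minimal sketch, assuming a cluster on localhost:9200; old_index and new_index are placeholder names −

```shell
# Copy all documents from the old 5.x index into a new index
# created on the current version (index names are illustrative).
curl -X POST "localhost:9200/_reindex" \
  -H 'Content-Type: application/json' -d'
{
  "source": { "index": "old_index" },
  "dest":   { "index": "new_index" }
}'
```

Once the documents are reindexed, the old index can be deleted and an alias pointed at the new one so that existing queries keep working.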

Full Cluster Restart

The process of a full cluster restart involves shutting down every node in the cluster, upgrading each node to 7.x and then restarting the cluster.

Following are the high level steps that need to be carried out for full cluster restart −

    Disable shard allocation

    Stop indexing and perform a synced flush

    Shutdown all nodes

    Upgrade all nodes

    Upgrade any plugins

    Start each upgraded node

    Wait for all nodes to join the cluster and report a status of yellow

    Re-enable allocation

Once allocation is re-enabled, the cluster starts allocating the replica shards to the data nodes. At this point, it is safe to resume indexing and searching, but your cluster will recover more quickly if you can wait until all primary and replica shards have been successfully allocated and the status of all nodes is green.
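The allocation and flush steps above map onto the cluster APIs roughly as follows. This is a sketch, assuming a node reachable on localhost:9200 −

```shell
# 1. Disable replica shard allocation before shutting nodes down
#    (primaries can still be allocated where needed).
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{ "persistent": { "cluster.routing.allocation.enable": "primaries" } }'

# 2. Stop indexing and perform a synced flush to speed up shard recovery.
curl -X POST "localhost:9200/_flush/synced"

# ... shut down, upgrade and restart the nodes ...

# 3. Re-enable shard allocation once the upgraded nodes have rejoined
#    (setting the value to null restores the default, "all").
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' -d'
{ "persistent": { "cluster.routing.allocation.enable": null } }'

# 4. Wait for the cluster to reach green status before resuming full load.
curl -X GET "localhost:9200/_cluster/health?wait_for_status=green&timeout=60s"
```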
