MySQL Cluster NDB 7.1.9 is a new release of MySQL Cluster, incorporating new features in the NDBCLUSTER storage engine and fixing recently discovered bugs in MySQL Cluster NDB 7.1.8 and previous MySQL Cluster releases.

Obtaining MySQL Cluster NDB 7.1. The latest MySQL Cluster NDB 7.1 binaries for supported platforms can be obtained from http://dev.mysql.com/downloads/cluster/. Source code for the latest MySQL Cluster NDB 7.1 release can be obtained from the same location.

This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.51 (see Section D.1.4, “Changes in MySQL 5.1.51 (10 September 2010)”).

Functionality added or changed:

  • Important Change: InnoDB Storage Engine: Building the MySQL Server with the InnoDB plugin is now supported when building MySQL Cluster. For more information, see Section 17.2.1.1, “MySQL Cluster Multi-Computer Installation”. (Bug#54912)

    See also Bug#58283.

  • Important Change: ndbd now bypasses use of Non-Uniform Memory Access (NUMA) support on Linux hosts by default. If your system supports NUMA, you can enable it and override ndbd's use of interleaving by setting the Numa data node configuration parameter, which is added in this release. See Defining Data Nodes: Realtime Performance Parameters, for more information. (Bug#57807)
  • A new diskpagebuffer table, providing statistics on disk page buffer usage by Disk Data tables, is added to the ndbinfo information database. These statistics can be used to monitor performance of reads and writes on Disk Data tables, and to assist in the tuning of related parameters such as DiskPageBufferMemory. For more information, see Section 17.5.8.4, “The ndbinfo diskpagebuffer Table”.
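
    Example. Assuming a running cluster that uses Disk Data tables, the new table can be queried like any other ndbinfo table. The column names in this sketch are assumptions based on the section referenced above; consult that section for the definitive list:

    SELECT node_id,
           page_requests_direct_return,
           page_requests_wait_io
    FROM   ndbinfo.diskpagebuffer;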

Bugs fixed:

  • Packaging: MySQL Cluster RPM distributions did not include a shared-compat RPM for the MySQL Server, which meant that MySQL applications depending on libmysqlclient.so.15 (MySQL 5.0 and earlier) no longer worked. (Bug#38596)
  • Partitioning: Trying to use the same column more than once in the partitioning key when partitioning a table by KEY caused mysqld to crash. Such duplication of key columns is now expressly disallowed, and fails with an appropriate error. (Bug#53354)
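
    Example. A statement along the following lines, which names c1 twice in the partitioning key, now fails with a duplicate-field error instead of crashing the server (the table and column names here are illustrative, not taken from the original bug report):

    CREATE TABLE t2 (c1 INT, c2 INT)
        ENGINE NDB
        PARTITION BY KEY (c1, c1)
        PARTITIONS 4;
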
  • On Windows, the angel process which monitors and (when necessary) restarts the data node process failed to spawn a new worker in some circumstances where the arguments vector contained extra items placed at its beginning. This could occur when the path to ndbd.exe or ndbmtd.exe contained one or more spaces. (Bug#57949)
  • The disconnection of an API or management node due to missed heartbeats led to a race condition which could cause data nodes to crash. (Bug#57946)
  • The method for calculating table schema versions used by schema transactions did not follow the established rules for recording schemas used in the P0.SchemaLog file. (Bug#57897)

    See also Bug#57896.

  • The LQHKEYREQ request message used by the local query handler when checking the major schema version of a table, being only 16 bits wide, could cause this check to fail with an Invalid schema version error (NDB error code 1227). This issue occurred after creating and dropping (and re-creating) the same table 65537 times, then trying to insert rows into the table. (Bug#57896)

    See also Bug#57897.

  • Data nodes compiled with gcc 4.5 or higher crashed during startup. (Bug#57761)
  • Transient errors during a local checkpoint were not retried, leading to a crash of the data node. Now when such errors occur, they are retried up to 10 times if necessary. (Bug#57650)
  • ndb_restore now retries failed transactions when replaying log entries, just as it does when restoring data. (Bug#57618)
  • The SUMA kernel block has a 10-element ring buffer for storing out-of-order SUB_GCP_COMPLETE_REP signals received from the local query handlers when global checkpoints are completed. In some cases, exceeding the ring buffer capacity on all nodes of a node group at the same time caused the node group to fail with an assertion. (Bug#57563)
  • During a GCP takeover, it was possible for one of the data nodes not to receive a SUB_GCP_COMPLETE_REP signal, with the result that it would report itself as GCP_COMMITTING while the other data nodes reported GCP_PREPARING. (Bug#57522)
  • Specifying a WHERE clause of the form range1 OR range2 when selecting from an NDB table having a primary key on multiple columns could result in Error 4259 Invalid set of range scan bounds if range2 started exactly where range1 ended and the primary key definition declared the columns in a different order relative to the order in the table’s column list. (Such a query should simply return all rows in the table, since any expression value < constant OR value >= constant is always true.)

    Example. Suppose t is an NDB table defined by the following CREATE TABLE statement:

    CREATE TABLE t (a INT, b INT, PRIMARY KEY (b, a)) ENGINE NDB;

    This issue could then be triggered by a query such as this one:

    SELECT * FROM t WHERE b < 8 OR b >= 8;

    In addition, the order of the ranges in the WHERE clause was significant; the issue was not triggered, for example, by the query SELECT * FROM t WHERE b <= 8 OR b > 8. (Bug#57396)

  • A number of cluster log warning messages relating to deprecated configuration parameters contained spelling, formatting, and other errors. (Bug#57381)
  • The MAX_ROWS option for CREATE TABLE was ignored, which meant that it was not possible to enable multi-threaded building of indexes. (Bug#57360)
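
    Example. MAX_ROWS is given as an ordinary table option when the table is created, as in this sketch (the table definition and the value shown are purely illustrative):

    CREATE TABLE t3 (
        id   INT NOT NULL PRIMARY KEY,
        data VARCHAR(255)
    )   ENGINE NDB
        MAX_ROWS = 100000000;
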
  • A GCP stop is detected using two parameters which determine the maximum time that a global checkpoint or epoch can go unchanged; one of these controls this timeout for GCPs and one controls the timeout for epochs. Suppose the cluster is configured such that TimeBetweenEpochsTimeout is 100 ms but HeartbeatIntervalDbDb is 1500 ms. A node failure can be signalled after 4 missed heartbeats (in this case, 6000 ms). However, this would exceed TimeBetweenEpochsTimeout, causing false detection of a GCP stop. To prevent this from happening, the configured value for TimeBetweenEpochsTimeout is automatically adjusted, based on the values of HeartbeatIntervalDbDb and ArbitrationTimeout.

    The current issue arose when the automatic adjustment routine did not correctly take into consideration the fact that, during cascading node failures, several intervals of length 4 * (HeartbeatIntervalDbDb + ArbitrationTimeout) may elapse before all node failures have been resolved internally. This could cause false detection of a GCP stop in the event of a cascading node failure. (Bug#57322)

  • Successive CREATE NODEGROUP and DROP NODEGROUP commands could cause mysqld processes to crash. (Bug#57164)
  • Queries using WHERE varchar_pk_column LIKE 'pattern%' or WHERE varchar_pk_column LIKE 'pattern_' against an NDB table having a VARCHAR column as its primary key failed to return all matching rows. (Bug#56853)
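
    Example. Given a table created as shown here (the definition is illustrative, not taken from the bug report), a prefix query such as the SELECT shown previously failed to return all matching rows:

    CREATE TABLE v (
        p VARCHAR(20) NOT NULL PRIMARY KEY
    )   ENGINE NDB;

    SELECT * FROM v WHERE p LIKE 'ab%';
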
  • Aborting a native NDB backup in the ndb_mgm client using the ABORT BACKUP command did not work correctly when using ndbmtd, in some cases leading to a crash of the cluster. (Bug#56285)
  • When a data node angel process failed to fork off a new worker process (to replace one that had failed), the failure was not handled. This meant that the angel process either transformed itself into a worker process, or itself failed. In the first case, the data node continued to run, but there was no longer any angel to restart it in the event of failure, even with StopOnError set to 0. (Bug#53456)
  • Disk Data: When performing online DDL on Disk Data tables, scans and moving of the relevant tuples were done in more or less random order. This fix causes these scans to be done in the order of the tuples, which should improve performance of such operations due to the more sequential ordering of the scans. (Bug#57848)

    See also Bug#57827.

  • Disk Data: Adding unique indexes to NDB Disk Data tables could take an extremely long time. This was particularly noticeable when using ndb_restore --rebuild-indexes. (Bug#57827)
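
    Example. In SQL terms, the affected operation is sketched below. This assumes a previously created tablespace named ts with an associated log file group; the indexed column is declared STORAGE MEMORY, since NDB does not index columns stored on disk, and all names are illustrative:

    CREATE TABLE dd (
        id      INT NOT NULL PRIMARY KEY,
        val     INT STORAGE MEMORY,
        payload VARCHAR(255)
    )   TABLESPACE ts STORAGE DISK
        ENGINE NDB;

    CREATE UNIQUE INDEX val_idx ON dd (val);
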
  • Cluster Replication: The OPTION_ALLOW_BATCHING bitmask had the same value as OPTION_PROFILING. This caused conflicts between using --slave-allow-batching and profiling. (Bug#57603)
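
    Example. The --slave-allow-batching server option corresponds to the slave_allow_batching global system variable, so the batching behavior affected by this fix can also be toggled at runtime:

    SET GLOBAL slave_allow_batching = ON;
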
  • Cluster Replication: Replication of SET and ENUM columns represented using more than 1 byte (that is, SET columns with more than 8 members and ENUM columns with more than 256 constants) between platforms using different endianness failed when using the row-based format. This was because columns of these types are represented internally using integers, but the internal functions used by MySQL to handle them treated them as strings. (Bug#52131)

    See also Bug#53528.
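
    Example. A SET column with more than 8 members requires 2 bytes of storage, and so was subject to this issue. A minimal illustrative table:

    CREATE TABLE s (
        c SET('a','b','c','d','e','f','g','h','i')  -- 9 members, stored in 2 bytes
    )   ENGINE NDB;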

  • Cluster API: An application dropping a table at the same time that another application tried to set up a replication event on the same table could lead to a crash of the data node. The same issue could sometimes cause NdbEventOperation::execute() to hang. (Bug#57886)
  • Cluster API: An NDB API client program under load could abort with an assertion error in TransporterFacade::remove_from_cond_wait_queue. (Bug#51775)
