What is MySQL database routing?

MySQL Router is lightweight middleware that provides transparent routing between your application and any backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers. The pluggable architecture also enables developers to extend MySQL Router for custom use cases.

Failover

Typically, a highly available MySQL setup consists of a single primary and multiple replicas, and it is up to the application to handle failover if the MySQL primary becomes unavailable. With MySQL Router, application connections are transparently routed according to the configured routing policy, without implementing custom failover code in the application.
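
For plain connection routing, the first-available strategy gives a simple form of failover: connections are sent to the first reachable server in the destinations list and only move on to the next one when that server becomes unavailable. A minimal sketch of such a routing section, using placeholder host names and an arbitrary port (metadata-driven failover with InnoDB Cluster is covered later in this page):

# placeholder hosts and port; first-available falls back to the next destination on failure
[routing:failover_example]
bind_address = localhost
bind_port = 6446
destinations = primary.example.org:3306,replica1.example.org:3306
routing_strategy = first-available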

Load Balancing

MySQL Router provides additional scalability and performance by distributing database connections across a pool of servers. For example, if you have a replicated set of MySQL Servers, MySQL Router can distribute application connections to them in a round-robin fashion.
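
A rough way to watch this distribution is to open several short-lived connections through a Router routing port and ask each backend for its host name. The sketch below assumes a routing section listening on port 7001 (as in the configuration example later on this page) and placeholder credentials:

# run this several times; with round-robin routing each new connection
# should report a different backend host name
$> mysql --user=root --port=7001 --protocol=TCP --execute="SELECT @@hostname"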

Pluggable Architecture

MySQL Router's pluggable architecture allows MySQL developers to extend the product with additional features, and gives MySQL users the ability to create their own custom plugins. MySQL Router currently ships with a number of core plugins, including:

  • The Connection Routing plugin, which performs connection-based routing: it forwards MySQL packets to the backend server without inspecting or modifying them, thus providing maximum throughput.
  • The Metadata Cache plugin, which provides transparent client load balancing, routing, and failover into Group Replication and InnoDB Clusters.
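
As a rough illustration of how these plugins appear in a configuration file, the sketch below shows the kind of metadata_cache and routing sections that a bootstrap run against an InnoDB Cluster generates; the cluster name, router id, metadata user, and port are placeholders, and the exact contents vary by Router version:

# placeholder values; a real file of this shape is generated by --bootstrap
[metadata_cache:myCluster]
cluster_type = gr
router_id = 1
user = mysql_router1_example
metadata_cluster = myCluster
ttl = 0.5

[routing:myCluster_rw]
bind_port = 6446
destinations = metadata-cache://myCluster/?role=PRIMARY
routing_strategy = first-available
protocol = classic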

MySQL Router transparently routes connections within a high availability group.

The Connection Routing plugin performs connection-based routing, meaning it forwards packets to the server without inspecting them. This is a simplistic approach that provides high throughput. For additional general information about connection routing, see Section 1.3, “Connection Routing”.

A simple connection-based routing setup is shown below. These and additional options are documented under Section 4.3.3, “Configuration File Options”.

[logger]
level = INFO

[routing:secondary]
bind_address = localhost
bind_port = 7001
destinations = foo.example.org:3306,bar.example.org:3306,baz.example.org:3306
routing_strategy = round-robin

[routing:primary]
bind_address = localhost
bind_port = 7002
destinations = foo.example.org:3306,bar.example.org:3306
routing_strategy = first-available

Here, connection routing distributes MySQL connections to the three MySQL servers listed for port 7001 in round-robin order, as defined by the round-robin routing_strategy. This example also configures the first-available strategy for two of the servers using port 7002; the first-available strategy uses the first available server from the destinations list. The number of MySQL instances assigned to each destinations option is up to you, as this is only an example. Router does not inspect the packets and does not restrict connections based on the assigned strategy or mode, so it is up to the application to determine where to send read and write requests, which is either port 7001 or 7002 in our example.
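
Because Router does not enforce this split, separating reads from writes with plain connection routing is purely a client-side convention. For instance, an application could send its writes to the first-available route on port 7002 and its reads to the round-robin route on port 7001; the schema, table, and credentials below are placeholders:

# writes via the first-available route (placeholder schema and table)
$> mysql --user=root --port=7002 --protocol=TCP --execute="INSERT INTO test.t1 VALUES (1)"
# reads via the round-robin route
$> mysql --user=root --port=7001 --protocol=TCP --execute="SELECT * FROM test.t1"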

Note

Before MySQL Router 8.0, the now-deprecated mode option was used; it was replaced by the routing_strategy option added in MySQL Router 8.0.

Assuming all three MySQL instances are running, next start MySQL Router by passing in the configuration file:

$> ./bin/mysqlrouter --config=/etc/mysqlrouter-config.conf

Now MySQL Router is listening on ports 7001 and 7002 and sends requests to the appropriate MySQL instances. For example:

$> ./bin/mysql --user=root --port 7001 --protocol=TCP

The first connection goes to foo.example.org, the next to bar.example.org, then to baz.example.org, and the fourth connection goes back to foo.example.org. Port 7002 was configured to behave differently:

$> ./bin/mysql --user=root --port 7002 --protocol=TCP

This first connects to foo.example.org, and additional connections continue going to foo.example.org until it fails, at which point bar.example.org is used instead. For additional information about this behavior, see the routing_strategy option.

MySQL Router is part of InnoDB Cluster and is lightweight middleware that provides transparent routing between your application and back-end MySQL Servers. It is used for a wide variety of use cases, such as providing high availability and scalability by routing database traffic to appropriate back-end MySQL servers. The pluggable architecture also enables developers to extend MySQL Router for custom use cases.

For additional details about how Router is part of InnoDB Cluster, see MySQL AdminAPI.

Introduction

For client applications to handle failover, they need to be aware of the InnoDB cluster topology and know which MySQL instance is the PRIMARY. While it is possible for applications to implement that logic, MySQL Router can provide and handle this functionality for you.

MySQL uses Group Replication to replicate databases across multiple servers while performing automatic failover in the event of a server failure. When used with a MySQL InnoDB Cluster, MySQL Router acts as a proxy that hides the multiple MySQL instances on your network and maps data requests to one of the cluster instances. As long as there are enough online replicas and communication between the components is intact, applications will be able to reach one of them. MySQL Router makes this possible by having applications connect to MySQL Router instead of directly to MySQL.
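
In practice, the application only needs the Router address and the ports that a bootstrap run creates. For the classic protocol these default to 6446 for read-write sessions (routed to the PRIMARY) and 6447 for read-only sessions (routed to SECONDARY instances); a sketch with placeholder credentials and Router running on the application host:

# read-write session, routed by Router to the current PRIMARY (default port 6446)
$> mysql --user=root --host=127.0.0.1 --port=6446 --protocol=TCP
# read-only session, routed to one of the SECONDARY instances (default port 6447)
$> mysql --user=root --host=127.0.0.1 --port=6447 --protocol=TCP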

Deploying Router with MySQL InnoDB Cluster

The recommended deployment model for MySQL Router is with InnoDB Cluster, with Router sitting on the same host as the application.

The steps for deploying MySQL Router with an InnoDB Cluster after configuring the cluster are:

  1. Install MySQL Router.

  2. Bootstrap MySQL Router against the InnoDB Cluster, and test.

    Bootstrapping automatically configures MySQL Router for an existing InnoDB Cluster by using --bootstrap and other command-line options. During bootstrap, Router connects to the cluster, fetches its metadata, and configures itself for use. Bootstrapping is optional.

    For additional information, see Chapter 3, Deploying MySQL Router.

  3. Set up MySQL Router for automatic startup.

    Configure your system to automatically start MySQL Router when the host is rebooted, a process similar to how the MySQL server is configured to start automatically. For additional details, see Section 5.1, “Starting MySQL Router”.
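
For example, on Linux hosts where MySQL Router was installed from the official packages, automatic startup is typically handled through systemd; the service name below is the one used by those packages and may differ on other platforms:

# service name assumed from the official Linux packages
$> systemctl enable mysqlrouter.service
$> systemctl start mysqlrouter.service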

For example, after creating a MySQL InnoDB Cluster, you might configure MySQL Router using:

$> mysqlrouter --bootstrap localhost:3310 --directory /opt/myrouter --user snoopy

This example bootstraps MySQL Router to an existing InnoDB Cluster where:

  • localhost:3310 is a member of the InnoDB Cluster; it is either the PRIMARY, or bootstrap redirects the operation to a PRIMARY in the cluster.

  • Because the optional --directory bootstrap option was used, this example creates a self-contained installation with all generated directories and files at /opt/myrouter/. These files include start.sh, stop.sh, log/, and a fully functional MySQL Router configuration file named mysqlrouter.conf.

  • Only the host's system user named snoopy will have access to /opt/myrouter/*.

See --bootstrap and related options for ways to modify the bootstrap configuration process. For example, passing in --conf-use-sockets enables Unix domain socket connections because only TCP/IP connections are enabled by default.
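
Once bootstrapped with --directory as above, the generated helper scripts can be used to start and stop this self-contained Router instance; the paths follow the --directory value used during bootstrap:

$> /opt/myrouter/start.sh
$> /opt/myrouter/stop.sh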