Apache Kafka Source Code Analysis – Controller
Published: 2019-06-18


The controller was introduced to support the replica mechanism. In 0.7 there was little coordination between brokers, but now every topic partition must decide which brokers its replicas live on, and operations such as reassignment and deletion must be handled as well. A master is needed to coordinate all of this, hence the controller.

One of the brokers is elected as the controller for the whole cluster. It will be responsible for:

  1. Leadership change of a partition (each leader can independently update ISR)
  2. New topics; deleted topics
  3. Replica re-assignment

After the controller makes a decision, it publishes the decision permanently in ZK and also sends the new decisions to affected brokers through direct RPC. The published decisions are the source of truth and they are used by clients for request routing and by each broker during startup to recover its state. After the broker is started, it picks up new decisions made by the controller through RPC.

Potential benefits:

  1. Easier debugging since leadership changes are made in a central place.
  2. ZK reads/writes needed for leadership changes can be batched (also easier to exploit ZK multi) and thus reduce end-to-end latency during failover.
  3. Fewer ZK watchers.
  4. More efficient communication of state changes by using direct RPC, instead of via a queue implementation in Zookeeper.

Potential downside:

  1. Need controller failover. 

ControllerContext

The key point is that it records the relationships between partitions and replicas for all topics, including both assignment and leadership.

```scala
package kafka.controller

class ControllerContext(val zkClient: ZkClient,
                        val zkSessionTimeout: Int) {
  var controllerChannelManager: ControllerChannelManager = null
  val controllerLock: ReentrantLock = new ReentrantLock()
  var shuttingDownBrokerIds: mutable.Set[Int] = mutable.Set.empty
  val brokerShutdownLock: Object = new Object
  var epoch: Int = KafkaController.InitialControllerEpoch - 1
  var epochZkVersion: Int = KafkaController.InitialControllerEpochZkVersion - 1
  val correlationId: AtomicInteger = new AtomicInteger(0)
  // relationships among topics, partitions and replicas
  var allTopics: Set[String] = Set.empty
  var partitionReplicaAssignment: mutable.Map[TopicAndPartition, Seq[Int]] = mutable.Map.empty
  var partitionLeadershipInfo: mutable.Map[TopicAndPartition, LeaderIsrAndControllerEpoch] = mutable.Map.empty
  var partitionsBeingReassigned: mutable.Map[TopicAndPartition, ReassignedPartitionsContext] = new mutable.HashMap
  var partitionsUndergoingPreferredReplicaElection: mutable.Set[TopicAndPartition] = new mutable.HashSet
  private var liveBrokersUnderlying: Set[Broker] = Set.empty
  private var liveBrokerIdsUnderlying: Set[Int] = Set.empty
```
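To make the role of these maps concrete, here is a minimal, self-contained sketch of how the controller tracks assignment and leadership. The `TopicAndPartition` case class here is a simplified stand-in, not the real Kafka type:

```scala
import scala.collection.mutable

// Simplified stand-in for Kafka's TopicAndPartition (illustrative only).
case class TopicAndPartition(topic: String, partition: Int)

object ContextSketch {
  // replica assignment: partition -> broker ids hosting a replica (the "AR")
  val partitionReplicaAssignment = mutable.Map.empty[TopicAndPartition, Seq[Int]]
  // leadership: partition -> current leader broker id
  val partitionLeadershipInfo = mutable.Map.empty[TopicAndPartition, Int]

  def main(args: Array[String]): Unit = {
    val tp = TopicAndPartition("orders", 0)
    partitionReplicaAssignment(tp) = Seq(1, 2, 3)                      // replicas on brokers 1, 2, 3
    partitionLeadershipInfo(tp) = partitionReplicaAssignment(tp).head  // first assigned replica leads
    assert(partitionLeadershipInfo(tp) == 1)
    println(s"$tp -> AR=${partitionReplicaAssignment(tp)}, leader=${partitionLeadershipInfo(tp)}")
  }
}
```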

PartitionStateMachine

Adding replicas makes the design considerably more complex; an important part of it is handling the many different situations partitions and replicas can be in.

First, look at the states a partition can be in and the state machine between them:

1. NonExistentPartition: the partition never existed or has been deleted; the valid previous state is OfflinePartition
2. NewPartition: just created, but leader election has not yet completed; the valid previous state is NonExistentPartition
3. OnlinePartition: has a leader, the normal state; valid previous states are NewPartition/OfflinePartition
4. OfflinePartition: the partition becomes offline when its leader dies; valid previous states are NewPartition/OnlinePartition

Both NewPartition and OfflinePartition are states without a leader, so why distinguish them? See below.
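The valid-previous-state rules above (including Online -> Online for re-election, which the state-machine code also permits) can be expressed as a small transition table. This is an illustrative sketch, not Kafka's actual PartitionStateMachine:

```scala
object PartitionTransitions {
  sealed trait PartitionState
  case object NonExistentPartition extends PartitionState
  case object NewPartition extends PartitionState
  case object OnlinePartition extends PartitionState
  case object OfflinePartition extends PartitionState

  // target state -> set of valid previous states (from the rules above,
  // plus the Offline -> Offline and Online -> Online cases the code allows)
  private val validPrevious: Map[PartitionState, Set[PartitionState]] = Map(
    NewPartition         -> Set(NonExistentPartition),
    OnlinePartition      -> Set(NewPartition, OnlinePartition, OfflinePartition),
    OfflinePartition     -> Set(NewPartition, OnlinePartition, OfflinePartition),
    NonExistentPartition -> Set(OfflinePartition)
  )

  def isLegal(from: PartitionState, to: PartitionState): Boolean =
    validPrevious(to).contains(from)
}
```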

```scala
/**
 * This class represents the state machine for partitions. It defines the states that a partition can be in, and
 * transitions to move the partition to another legal state. The different states that a partition can be in are -
 * 1. NonExistentPartition: This state indicates that the partition was either never created or was created and then
 *                          deleted. Valid previous state, if one exists, is OfflinePartition
 * 2. NewPartition        : After creation, the partition is in the NewPartition state. In this state, the partition should have
 *                          replicas assigned to it, but no leader/isr yet. Valid previous states are NonExistentPartition
 * 3. OnlinePartition     : Once a leader is elected for a partition, it is in the OnlinePartition state.
 *                          Valid previous states are NewPartition/OfflinePartition
 * 4. OfflinePartition    : If, after successful leader election, the leader for partition dies, then the partition
 *                          moves to the OfflinePartition state. Valid previous states are NewPartition/OnlinePartition
 */
sealed trait PartitionState { def state: Byte }
case object NewPartition extends PartitionState { val state: Byte = 0 }
case object OnlinePartition extends PartitionState { val state: Byte = 1 }
case object OfflinePartition extends PartitionState { val state: Byte = 2 }
case object NonExistentPartition extends PartitionState { val state: Byte = 3 }
```

The core function is handleStateChanges: it invokes the state-machine function handleStateChange for each TopicAndPartition, then sends the results to the affected brokers.

```scala
/**
 * This API is invoked by the partition change zookeeper listener
 * @param partitions   The list of partitions that need to be transitioned to the target state
 * @param targetState  The state that the partitions should be moved to
 */
def handleStateChanges(partitions: Set[TopicAndPartition], targetState: PartitionState,
                       leaderSelector: PartitionLeaderSelector = noOpPartitionLeaderSelector,
                       callbacks: Callbacks = (new CallbackBuilder).build) {
  try {
    brokerRequestBatch.newBatch()
    partitions.foreach { topicAndPartition =>
      handleStateChange(topicAndPartition.topic, topicAndPartition.partition, targetState, leaderSelector, callbacks)
    }
    brokerRequestBatch.sendRequestsToBrokers(controller.epoch, controllerContext.correlationId.getAndIncrement)
  } catch {
  }
}
```

The concrete logic of the state-machine function follows. Most of the transitions are straightforward; only the transition to OnlinePartition needs to distinguish the new and offline cases, examined in detail below.

```scala
/**
 * This API exercises the partition's state machine. It ensures that every state transition happens from a legal
 * previous state to the target state. Valid state transitions are:
 * NonExistentPartition -> NewPartition:
 * --load assigned replicas from ZK to controller cache
 *
 * NewPartition -> OnlinePartition
 * --assign first live replica as the leader and all live replicas as the isr; write leader and isr to ZK for this partition
 * --send LeaderAndIsr request to every live replica and UpdateMetadata request to every live broker
 *
 * OnlinePartition,OfflinePartition -> OnlinePartition
 * --select new leader and isr for this partition and a set of replicas to receive the LeaderAndIsr request, and write leader and isr to ZK
 * --for this partition, send LeaderAndIsr request to every receiving replica and UpdateMetadata request to every live broker
 *
 * NewPartition,OnlinePartition,OfflinePartition -> OfflinePartition
 * --nothing other than marking partition state as Offline
 *
 * OfflinePartition -> NonExistentPartition
 * --nothing other than marking the partition state as NonExistentPartition
 * @param topic       The topic of the partition for which the state transition is invoked
 * @param partition   The partition for which the state transition is invoked
 * @param targetState The end state that the partition should be moved to
 */
private def handleStateChange(topic: String, partition: Int, targetState: PartitionState,
                              leaderSelector: PartitionLeaderSelector,
                              callbacks: Callbacks) {
  val topicAndPartition = TopicAndPartition(topic, partition)
  val currState = partitionState.getOrElseUpdate(topicAndPartition, NonExistentPartition)
  try {
    targetState match {
      case NewPartition =>
        // pre: partition did not exist before this
        assertValidPreviousStates(topicAndPartition, List(NonExistentPartition), NewPartition) // check that the previous state is valid
        assignReplicasToPartitions(topic, partition) // read the assigned replicas from ZK
        partitionState.put(topicAndPartition, NewPartition) // set the partition state to NewPartition
        val assignedReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition).mkString(",") // fetch the AR
        // post: partition has been assigned replicas
      case OnlinePartition =>
        assertValidPreviousStates(topicAndPartition, List(NewPartition, OnlinePartition, OfflinePartition), OnlinePartition)
        partitionState(topicAndPartition) match {
          case NewPartition =>
            // initialize leader and isr path for new partition
            initializeLeaderAndIsrForPartition(topicAndPartition) // initialize the leader
          case OfflinePartition =>
            electLeaderForPartition(topic, partition, leaderSelector) // elect a new leader
          case OnlinePartition => // invoked when the leader needs to be re-elected
            electLeaderForPartition(topic, partition, leaderSelector)
          case _ => // should never come here since illegal previous states are checked above
        }
        partitionState.put(topicAndPartition, OnlinePartition)
        // post: partition has a leader
      case OfflinePartition =>
        // pre: partition should be in New or Online state
        assertValidPreviousStates(topicAndPartition, List(NewPartition, OnlinePartition, OfflinePartition), OfflinePartition)
        // should be called when the leader for a partition is no longer alive
        partitionState.put(topicAndPartition, OfflinePartition) // update the state
        // post: partition has no alive leader
      case NonExistentPartition =>
        // pre: partition should be in Offline state
        assertValidPreviousStates(topicAndPartition, List(OfflinePartition), NonExistentPartition)
        partitionState.put(topicAndPartition, NonExistentPartition) // update the state
        // post: partition state is deleted from all brokers and zookeeper
    }
  } catch {
  }
}
```

To initialize a leader, the controller simply takes the head of liveAssignedReplicas.

For the offline case, replicas in the ISR take priority, and only then those merely in the AR. (Initially AR should equal ISR, but since the ISR list is adjusted over time by shrinking it, in general AR ⊇ ISR.)
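That preference order can be sketched as a standalone function (a simplified signature; the real selector works on LeaderAndIsr objects and ZK path versions):

```scala
object OfflineLeaderSketch {
  /** Pick a new leader: prefer live ISR members; fall back to live assigned
    * replicas (unclean election, potential data loss); else fail.
    * Returns (newLeader, newIsr) on success. */
  def selectLeader(assigned: Seq[Int], isr: Seq[Int],
                   liveBrokers: Set[Int]): Either[String, (Int, Seq[Int])] = {
    val liveAssigned = assigned.filter(liveBrokers.contains)
    val liveIsr      = isr.filter(liveBrokers.contains)
    if (liveIsr.nonEmpty)
      Right((liveIsr.head, liveIsr))                       // clean: leader and ISR from live ISR
    else if (liveAssigned.nonEmpty)
      Right((liveAssigned.head, Seq(liveAssigned.head)))   // unclean: ISR collapses to the new leader
    else
      Left("no replica online")                            // NoReplicaOnlineException in the real code
  }
}
```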

NewPartition->OnlinePartition

```scala
/**
 * Invoked on the NewPartition->OnlinePartition state change. When a partition is in the New state, it does not have
 * a leader and isr path in zookeeper. Once the partition moves to the OnlinePartition state, its leader and isr
 * path gets initialized and it never goes back to the NewPartition state. From here, it can only go to the
 * OfflinePartition state.
 * @param topicAndPartition   The topic/partition whose leader and isr path is to be initialized
 */
private def initializeLeaderAndIsrForPartition(topicAndPartition: TopicAndPartition) {
  val replicaAssignment = controllerContext.partitionReplicaAssignment(topicAndPartition)
  val liveAssignedReplicas = replicaAssignment.filter(r => controllerContext.liveBrokerIds.contains(r))
  liveAssignedReplicas.size match {
    case 0 => // no live replica: raise an error
    case _ =>
      // make the first replica in the list of assigned replicas, the leader
      val leader = liveAssignedReplicas.head // simply take the first live assigned replica as the leader
      val leaderIsrAndControllerEpoch = new LeaderIsrAndControllerEpoch(new LeaderAndIsr(leader, liveAssignedReplicas.toList),
        controller.epoch)
  }
}
```
OfflinePartition,OnlinePartition->OnlinePartition
```scala
/**
 * Invoked on the OfflinePartition,OnlinePartition->OnlinePartition state change.
 * It invokes the leader election API to elect a leader for the input offline partition
 * @param topic               The topic of the offline partition
 * @param partition           The offline partition
 * @param leaderSelector      Specific leader selector (e.g., offline/reassigned/etc.)
 */
def electLeaderForPartition(topic: String, partition: Int, leaderSelector: PartitionLeaderSelector) {
  val topicAndPartition = TopicAndPartition(topic, partition)
  // handle leader election for the partitions whose leader is no longer alive
  try {
    // elect new leader or throw exception
    val (leaderAndIsr, replicas) = leaderSelector.selectLeader(topicAndPartition, currentLeaderAndIsr) // use PartitionLeaderSelector.selectLeader to pick the leader
    val newLeaderIsrAndControllerEpoch = new LeaderIsrAndControllerEpoch(leaderAndIsr, controller.epoch)
  } catch {
  }
}
```
OfflinePartitionLeaderSelector

```scala
package kafka.controller

/**
 * Select the new leader, new isr and receiving replicas (for the LeaderAndIsrRequest):
 * 1. If at least one broker from the isr is alive, it picks a broker from the live isr as the new leader and the live
 *    isr as the new isr.
 * 2. Else, it picks some alive broker from the assigned replica list as the new leader and the new isr.
 * 3. If no broker in the assigned replica list is alive, it throws NoReplicaOnlineException
 * Replicas to receive LeaderAndIsr request = live assigned replicas
 * Once the leader is successfully registered in zookeeper, it updates the allLeaders cache
 */
class OfflinePartitionLeaderSelector(controllerContext: ControllerContext) extends PartitionLeaderSelector with Logging {
  def selectLeader(topicAndPartition: TopicAndPartition, currentLeaderAndIsr: LeaderAndIsr): (LeaderAndIsr, Seq[Int]) = {
    controllerContext.partitionReplicaAssignment.get(topicAndPartition) match {
      case Some(assignedReplicas) =>
        val liveAssignedReplicas = assignedReplicas.filter(r => controllerContext.liveBrokerIds.contains(r))
        val liveBrokersInIsr = currentLeaderAndIsr.isr.filter(r => controllerContext.liveBrokerIds.contains(r))
        val currentLeaderEpoch = currentLeaderAndIsr.leaderEpoch
        val currentLeaderIsrZkPathVersion = currentLeaderAndIsr.zkVersion
        val newLeaderAndIsr = liveBrokersInIsr.isEmpty match {
          case true =>
            debug("No broker in ISR is alive for %s. Pick the leader from the alive assigned replicas: %s"
              .format(topicAndPartition, liveAssignedReplicas.mkString(",")))
            liveAssignedReplicas.isEmpty match {
              case true =>
                throw new NoReplicaOnlineException(("No replica for partition " +
                  "%s is alive. Live brokers are: [%s],".format(topicAndPartition, controllerContext.liveBrokerIds)) +
                  " Assigned replicas are: [%s]".format(assignedReplicas))
              case false =>
                ControllerStats.uncleanLeaderElectionRate.mark()
                val newLeader = liveAssignedReplicas.head // fall back to a replica from the AR
                warn("No broker in ISR is alive for %s. Elect leader %d from live brokers %s. There's potential data loss."
                  .format(topicAndPartition, newLeader, liveAssignedReplicas.mkString(",")))
                new LeaderAndIsr(newLeader, currentLeaderEpoch + 1, List(newLeader), currentLeaderIsrZkPathVersion + 1)
            }
          case false =>
            val newLeader = liveBrokersInIsr.head // prefer a replica from the ISR
            debug("Some broker in ISR is alive for %s. Select %d from ISR %s to be the leader."
              .format(topicAndPartition, newLeader, liveBrokersInIsr.mkString(",")))
            new LeaderAndIsr(newLeader, currentLeaderEpoch + 1, liveBrokersInIsr.toList, currentLeaderIsrZkPathVersion + 1)
        }
        info("Selected new leader and ISR %s for offline partition %s".format(newLeaderAndIsr.toString(), topicAndPartition))
        (newLeaderAndIsr, liveAssignedReplicas)
      case None =>
        throw new NoReplicaOnlineException("Partition %s doesn't have".format(topicAndPartition) + " replicas assigned to it")
    }
  }
}
```

ReplicaStateMachine

A replica likewise has a fairly complex set of states and a corresponding state machine:

1. NewReplica: newly created; can only accept a become-follower request; the valid previous state is NonExistentReplica
2. OnlineReplica: the normal state; can accept both become-leader and become-follower requests; valid previous states are NewReplica, OnlineReplica or OfflineReplica
3. OfflineReplica: the replica has died, usually because the broker hosting it went down; valid previous states are NewReplica, OnlineReplica

Deletion involves three states; only an offline replica may be deleted:

4. ReplicaDeletionStarted: the valid previous state is OfflineReplica
5. ReplicaDeletionSuccessful: the valid previous state is ReplicaDeletionStarted
6. ReplicaDeletionIneligible: the valid previous state is ReplicaDeletionStarted
7. NonExistentReplica: the replica has been deleted successfully; the valid previous state is ReplicaDeletionSuccessful
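The deletion lifecycle among these states can be sketched as a small step function (illustrative only; `deleteSucceeded` stands in for the broker's response to the StopReplica request with deletion):

```scala
object ReplicaDeletionSketch {
  sealed trait ReplicaState
  case object OfflineReplica extends ReplicaState
  case object ReplicaDeletionStarted extends ReplicaState
  case object ReplicaDeletionSuccessful extends ReplicaState
  case object ReplicaDeletionIneligible extends ReplicaState
  case object NonExistentReplica extends ReplicaState

  /** Advance the deletion lifecycle one step; `deleteSucceeded` models whether
    * the replica responded without error to the delete request. */
  def step(s: ReplicaState, deleteSucceeded: Boolean): ReplicaState = s match {
    case OfflineReplica            => ReplicaDeletionStarted
    case ReplicaDeletionStarted    => if (deleteSucceeded) ReplicaDeletionSuccessful
                                      else ReplicaDeletionIneligible
    case ReplicaDeletionSuccessful => NonExistentReplica
    case other                     => other // no deletion progress from other states
  }
}
```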

```scala
/**
 * This class represents the state machine for replicas. It defines the states that a replica can be in, and
 * transitions to move the replica to another legal state. The different states that a replica can be in are -
 * 1. NewReplica        : The controller can create new replicas during partition reassignment. In this state, a
 *                        replica can only get become follower state change request. Valid previous
 *                        state is NonExistentReplica
 * 2. OnlineReplica     : Once a replica is started and part of the assigned replicas for its partition, it is in this
 *                        state. In this state, it can get either become leader or become follower state change requests.
 *                        Valid previous state are NewReplica, OnlineReplica or OfflineReplica
 * 3. OfflineReplica    : If a replica dies, it moves to this state. This happens when the broker hosting the replica
 *                        is down. Valid previous state are NewReplica, OnlineReplica
 * 4. ReplicaDeletionStarted: If replica deletion starts, it is moved to this state. Valid previous state is OfflineReplica
 * 5. ReplicaDeletionSuccessful: If replica responds with no error code in response to a delete replica request, it is
 *                        moved to this state. Valid previous state is ReplicaDeletionStarted
 * 6. ReplicaDeletionIneligible: If replica deletion fails, it is moved to this state. Valid previous state is ReplicaDeletionStarted
 * 7. NonExistentReplica: If a replica is deleted successfully, it is moved to this state. Valid previous state is
 *                        ReplicaDeletionSuccessful
 */
sealed trait ReplicaState { def state: Byte }
case object NewReplica extends ReplicaState { val state: Byte = 1 }
case object OnlineReplica extends ReplicaState { val state: Byte = 2 }
case object OfflineReplica extends ReplicaState { val state: Byte = 3 }
case object ReplicaDeletionStarted extends ReplicaState { val state: Byte = 4 }
case object ReplicaDeletionSuccessful extends ReplicaState { val state: Byte = 5 }
case object ReplicaDeletionIneligible extends ReplicaState { val state: Byte = 6 }
case object NonExistentReplica extends ReplicaState { val state: Byte = 7 }
```

As with PartitionStateMachine, the most important function is the state-machine function:

```scala
/**
 * This API exercises the replica's state machine. It ensures that every state transition happens from a legal
 * previous state to the target state. Valid state transitions are:
 * NonExistentReplica --> NewReplica
 * --send LeaderAndIsr request with current leader and isr to the new replica and UpdateMetadata request for the
 *   partition to every live broker
 *
 * NewReplica -> OnlineReplica
 * --add the new replica to the assigned replica list if needed
 *
 * OnlineReplica,OfflineReplica -> OnlineReplica
 * --send LeaderAndIsr request with current leader and isr to the new replica and UpdateMetadata request for the
 *   partition to every live broker
 *
 * NewReplica,OnlineReplica,OfflineReplica,ReplicaDeletionIneligible -> OfflineReplica
 * --send StopReplicaRequest to the replica (w/o deletion)
 * --remove this replica from the isr and send LeaderAndIsr request (with new isr) to the leader replica and
 *   UpdateMetadata request for the partition to every live broker.
 *
 * OfflineReplica -> ReplicaDeletionStarted
 * --send StopReplicaRequest to the replica (with deletion)
 *
 * ReplicaDeletionStarted -> ReplicaDeletionSuccessful
 * -- mark the state of the replica in the state machine
 *
 * ReplicaDeletionStarted -> ReplicaDeletionIneligible
 * -- mark the state of the replica in the state machine
 *
 * ReplicaDeletionSuccessful -> NonExistentReplica
 * -- remove the replica from the in memory partition replica assignment cache
 * @param partitionAndReplica The replica for which the state transition is invoked
 * @param targetState The end state that the replica should be moved to
 */
def handleStateChange(partitionAndReplica: PartitionAndReplica, targetState: ReplicaState,
                      callbacks: Callbacks) {
  val topic = partitionAndReplica.topic
  val partition = partitionAndReplica.partition
  val replicaId = partitionAndReplica.replica
  val topicAndPartition = TopicAndPartition(topic, partition)
  val currState = replicaState.getOrElseUpdate(partitionAndReplica, NonExistentReplica)
  try {
    val replicaAssignment = controllerContext.partitionReplicaAssignment(topicAndPartition)
    targetState match {
      case NewReplica =>
        assertValidPreviousStates(partitionAndReplica, List(NonExistentReplica), targetState)
        // start replica as a follower to the current leader for its partition
        val leaderIsrAndControllerEpochOpt = ZkUtils.getLeaderIsrAndEpochForPartition(zkClient, topic, partition)
        leaderIsrAndControllerEpochOpt match {
          case Some(leaderIsrAndControllerEpoch) =>
            brokerRequestBatch.addLeaderAndIsrRequestForBrokers(List(replicaId), // notify the other brokers
                                                                topic, partition, leaderIsrAndControllerEpoch,
                                                                replicaAssignment)
          case None => // new leader request will be sent to this replica when one gets elected
        }
        replicaState.put(partitionAndReplica, NewReplica) // update the replica state
      case ReplicaDeletionStarted =>
        assertValidPreviousStates(partitionAndReplica, List(OfflineReplica), targetState)
        replicaState.put(partitionAndReplica, ReplicaDeletionStarted)
        // send stop replica command
        brokerRequestBatch.addStopReplicaRequestForBrokers(List(replicaId), topic, partition, deletePartition = true,
          callbacks.stopReplicaResponseCallback)
      case ReplicaDeletionIneligible =>
        assertValidPreviousStates(partitionAndReplica, List(ReplicaDeletionStarted), targetState)
        replicaState.put(partitionAndReplica, ReplicaDeletionIneligible)
      case ReplicaDeletionSuccessful =>
        assertValidPreviousStates(partitionAndReplica, List(ReplicaDeletionStarted), targetState)
        replicaState.put(partitionAndReplica, ReplicaDeletionSuccessful)
      case NonExistentReplica =>
        assertValidPreviousStates(partitionAndReplica, List(ReplicaDeletionSuccessful), targetState)
        // remove this replica from the assigned replicas list for its partition
        val currentAssignedReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition)
        controllerContext.partitionReplicaAssignment.put(topicAndPartition, currentAssignedReplicas.filterNot(_ == replicaId))
        replicaState.remove(partitionAndReplica)
      case OnlineReplica =>
        assertValidPreviousStates(partitionAndReplica,
          List(NewReplica, OnlineReplica, OfflineReplica, ReplicaDeletionIneligible), targetState)
        replicaState(partitionAndReplica) match {
          case NewReplica =>
            // add this replica to the assigned replicas list for its partition
            val currentAssignedReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition)
            if(!currentAssignedReplicas.contains(replicaId))
              controllerContext.partitionReplicaAssignment.put(topicAndPartition, currentAssignedReplicas :+ replicaId) // add it to the AR
          case _ =>
            // check if the leader for this partition ever existed
            controllerContext.partitionLeadershipInfo.get(topicAndPartition) match {
              case Some(leaderIsrAndControllerEpoch) =>
                brokerRequestBatch.addLeaderAndIsrRequestForBrokers(List(replicaId), topic, partition, leaderIsrAndControllerEpoch,
                  replicaAssignment)
                replicaState.put(partitionAndReplica, OnlineReplica)
              case None => // that means the partition was never in OnlinePartition state, this means the broker never
                // started a log for that partition and does not have a high watermark value for this partition
            }
        }
        replicaState.put(partitionAndReplica, OnlineReplica)
      case OfflineReplica =>
        assertValidPreviousStates(partitionAndReplica,
          List(NewReplica, OnlineReplica, OfflineReplica, ReplicaDeletionIneligible), targetState)
        // send stop replica command to the replica so that it stops fetching from the leader
        brokerRequestBatch.addStopReplicaRequestForBrokers(List(replicaId), topic, partition, deletePartition = false) // stop the replica
        // As an optimization, the controller removes dead replicas from the ISR
        val leaderAndIsrIsEmpty: Boolean =
          controllerContext.partitionLeadershipInfo.get(topicAndPartition) match {
            case Some(currLeaderIsrAndControllerEpoch) =>
              controller.removeReplicaFromIsr(topic, partition, replicaId) match {
                case Some(updatedLeaderIsrAndControllerEpoch) =>
                  // send the shrunk ISR state change request only to the leader
                  brokerRequestBatch.addLeaderAndIsrRequestForBrokers(List(updatedLeaderIsrAndControllerEpoch.leaderAndIsr.leader), // shrink the ISR
                    topic, partition, updatedLeaderIsrAndControllerEpoch, replicaAssignment)
                  replicaState.put(partitionAndReplica, OfflineReplica)
                  false
                case None =>
                  true
              }
            case None =>
              true
          }
    }
  } catch {
  }
}
```

KafkaController

KafkaController defines:

1. the partition and replica state machines
2. a controllerElector: as master, the controller can simply select partition leaders, but if the controller itself dies a new controller must be elected among the brokers, which requires a ZK-based election algorithm
3. a set of partition leader selectors, used to pick a partition's leader in different situations
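The ZK-based election boils down to "the first broker to create the ephemeral /controller znode wins; when its session expires the znode vanishes and a re-election occurs". A toy in-memory simulation of that rule, with no real ZooKeeper involved:

```scala
object ElectionSketch {
  // None models the ephemeral /controller znode being absent.
  private var controllerZnode: Option[Int] = None

  /** Attempt to create the ephemeral znode; only the first caller succeeds. */
  def tryElect(brokerId: Int): Boolean = synchronized {
    if (controllerZnode.isEmpty) { controllerZnode = Some(brokerId); true }
    else false
  }

  /** The controller's ZK session expired: the ephemeral znode disappears,
    * letting some other broker win the next election. */
  def onSessionExpired(): Unit = synchronized { controllerZnode = None }

  def currentController: Option[Int] = synchronized { controllerZnode }
}
```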

```scala
class KafkaController(val config : KafkaConfig, zkClient: ZkClient) extends Logging with KafkaMetricsGroup with KafkaControllerMBean {
  val controllerContext = new ControllerContext(zkClient, config.zkSessionTimeoutMs)
  val partitionStateMachine = new PartitionStateMachine(this) // partition state machine
  val replicaStateMachine = new ReplicaStateMachine(this) // replica state machine
  private val controllerElector = new ZookeeperLeaderElector(controllerContext, ZkUtils.ControllerPath, onControllerFailover,
    onControllerResignation, config.brokerId) // elector based on a ZK ephemeral node, used to elect the controller
  // have a separate scheduler for the controller to be able to start and stop independently of the
  // kafka server
  private val autoRebalanceScheduler = new KafkaScheduler(1)
  var deleteTopicManager: TopicDeletionManager = null
  val offlinePartitionSelector = new OfflinePartitionLeaderSelector(controllerContext)
  private val reassignedPartitionLeaderSelector = new ReassignedPartitionLeaderSelector(controllerContext)
  private val preferredReplicaPartitionLeaderSelector = new PreferredReplicaPartitionLeaderSelector(controllerContext)
  private val controlledShutdownPartitionLeaderSelector = new ControlledShutdownLeaderSelector(controllerContext)
  private val brokerRequestBatch = new ControllerBrokerRequestBatch(this) // used to communicate with brokers
  registerControllerChangedListener()
}
```

Now look at the interfaces implemented on the controller, which cover a series of operations on brokers, the controller itself, and partitions.

onBrokerStartup:

  1. send UpdateMetadata requests for all partitions to newly started brokers
  2. replicas on the newly started broker -> OnlineReplica
  3. partitions in OfflinePartition and NewPartition -> OnlinePartition (with OfflinePartitionLeaderSelector)
  4. for partitions with replicas on newly started brokers, call onPartitionReassignment to complete any outstanding partition reassignment

```scala
/**
 * This callback is invoked by the replica state machine's broker change listener, with the list of newly started
 * brokers as input. It does the following -
 * 1. Triggers the OnlinePartition state change for all new/offline partitions
 * 2. It checks whether there are reassigned replicas assigned to any newly started brokers. If
 *    so, it performs the reassignment logic for each topic/partition.
 *
 * Note that we don't need to refresh the leader/isr cache for all topic/partitions at this point for two reasons:
 * 1. The partition state machine, when triggering online state change, will refresh leader and ISR for only those
 *    partitions currently new or offline (rather than every partition this controller is aware of)
 * 2. Even if we do refresh the cache, there is no guarantee that by the time the leader and ISR request reaches
 *    every broker that it is still valid. Brokers check the leader epoch to determine validity of the request.
 */
def onBrokerStartup(newBrokers: Seq[Int]) {
  info("New broker startup callback for %s".format(newBrokers.mkString(",")))
  val newBrokersSet = newBrokers.toSet
  // send update metadata request for all partitions to the newly restarted brokers. In cases of controlled shutdown
  // leaders will not be elected when a new broker comes up. So at least in the common controlled shutdown case, the
  // metadata will reach the new brokers faster
  sendUpdateMetadataRequest(newBrokers)
  // the very first thing to do when a new broker comes up is send it the entire list of partitions that it is
  // supposed to host. Based on that the broker starts the high watermark threads for the input list of partitions
  val allReplicasOnNewBrokers = controllerContext.replicasOnBrokers(newBrokersSet)
  replicaStateMachine.handleStateChanges(allReplicasOnNewBrokers, OnlineReplica)
  // when a new broker comes up, the controller needs to trigger leader election for all new and offline partitions
  // to see if these brokers can become leaders for some/all of those
  partitionStateMachine.triggerOnlinePartitionStateChange()
  // check if reassignment of some partitions need to be restarted
  val partitionsWithReplicasOnNewBrokers = controllerContext.partitionsBeingReassigned.filter {
    case (topicAndPartition, reassignmentContext) => reassignmentContext.newReplicas.exists(newBrokersSet.contains(_))
  }
  partitionsWithReplicasOnNewBrokers.foreach(p => onPartitionReassignment(p._1, p._2))
  // check if topic deletion needs to be resumed. If at least one replica that belongs to the topic being deleted exists
  // on the newly restarted brokers, there is a chance that topic deletion can resume
  val replicasForTopicsToBeDeleted = allReplicasOnNewBrokers.filter(p => deleteTopicManager.isTopicQueuedUpForDeletion(p.topic))
  if(replicasForTopicsToBeDeleted.size > 0) {
    info(("Some replicas %s for topics scheduled for deletion %s are on the newly restarted brokers %s. " +
      "Signaling restart of topic deletion for these topics").format(replicasForTopicsToBeDeleted.mkString(","),
      deleteTopicManager.topicsToBeDeleted.mkString(","), newBrokers.mkString(",")))
    deleteTopicManager.resumeDeletionForTopics(replicasForTopicsToBeDeleted.map(_.topic))
  }
}
```

onBrokerFailure:

  1. partitions w/o leader -> OfflinePartition
  2. partitions in OfflinePartition and NewPartition -> OnlinePartition (with OfflinePartitionLeaderSelector)
  3. each replica on the failed broker -> OfflineReplica

/**
 * This callback is invoked by the replica state machine's broker change listener with the list of failed brokers
 * as input. It does the following -
 * 1. Mark partitions with dead leaders as offline
 * 2. Triggers the OnlinePartition state change for all new/offline partitions
 * 3. Invokes the OfflineReplica state change on the input list of newly started brokers
 *
 * Note that we don't need to refresh the leader/isr cache for all topic/partitions at this point.  This is because
 * the partition state machine will refresh our cache for us when performing leader election for all new/offline
 * partitions coming online.
 */
def onBrokerFailure(deadBrokers: Seq[Int]) {
  info("Broker failure callback for %s".format(deadBrokers.mkString(",")))
  val deadBrokersThatWereShuttingDown =
    deadBrokers.filter(id => controllerContext.shuttingDownBrokerIds.remove(id))
  info("Removed %s from list of shutting down brokers.".format(deadBrokersThatWereShuttingDown))
  val deadBrokersSet = deadBrokers.toSet
  // trigger OfflinePartition state for all partitions whose current leader is one amongst the dead brokers
  val partitionsWithoutLeader = controllerContext.partitionLeadershipInfo.filter(partitionAndLeader =>
    deadBrokersSet.contains(partitionAndLeader._2.leaderAndIsr.leader) &&
      !deleteTopicManager.isTopicQueuedUpForDeletion(partitionAndLeader._1.topic)).keySet
  partitionStateMachine.handleStateChanges(partitionsWithoutLeader, OfflinePartition)
  // trigger OnlinePartition state changes for offline or new partitions
  partitionStateMachine.triggerOnlinePartitionStateChange()
  // filter out the replicas that belong to topics that are being deleted
  var allReplicasOnDeadBrokers = controllerContext.replicasOnBrokers(deadBrokersSet)
  val activeReplicasOnDeadBrokers = allReplicasOnDeadBrokers.filterNot(p => deleteTopicManager.isTopicQueuedUpForDeletion(p.topic))
  // handle dead replicas
  replicaStateMachine.handleStateChanges(activeReplicasOnDeadBrokers, OfflineReplica)
  // check if topic deletion state for the dead replicas needs to be updated
  val replicasForTopicsToBeDeleted = allReplicasOnDeadBrokers.filter(p => deleteTopicManager.isTopicQueuedUpForDeletion(p.topic))
  if(replicasForTopicsToBeDeleted.size > 0) {
    // it is required to mark the respective replicas in TopicDeletionFailed state since the replica cannot be
    // deleted when the broker is down. This will prevent the replica from being in TopicDeletionStarted state indefinitely
    // since topic deletion cannot be retried until at least one replica is in TopicDeletionStarted state
    deleteTopicManager.failReplicaDeletion(replicasForTopicsToBeDeleted)
  }
}
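The heart of onBrokerFailure is the filter that selects the partitions whose current leader sits on a dead broker, while skipping topics already queued for deletion. As an illustration only (not Kafka code; the function and data names are invented), the same selection can be sketched in Python:

```python
def partitions_without_leader(leadership, dead_brokers, topics_being_deleted):
    """Mirror of the Scala filter in onBrokerFailure: keep the partitions
    whose current leader is one of the dead brokers, unless the topic is
    already queued up for deletion."""
    dead = set(dead_brokers)
    return {
        (topic, partition)
        for (topic, partition), leader in leadership.items()
        if leader in dead and topic not in topics_being_deleted
    }

# (topic, partition) -> current leader broker id
leadership = {("orders", 0): 1, ("orders", 1): 2, ("logs", 0): 2}
# broker 2 died: ("orders", 1) loses its leader; ("logs", 0) is being deleted, so skipped
print(sorted(partitions_without_leader(leadership, [2], {"logs"})))
# -> [('orders', 1)]
```

The selected partitions are then handed to the partition state machine as OfflinePartition, exactly as the Scala does with handleStateChanges.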

shutdownBroker:

1. each partition whose leader is on the shutting-down broker -> OnlinePartition (ControlledShutdownPartitionLeaderSelector)
2. for each replica on the shutting-down broker that is a follower, send a StopReplica request (w/o deletion)
3. each replica on the shutting-down broker that is a follower -> OfflineReplica (to force the replica out of the ISR)

/**
 * On clean shutdown, the controller first determines the partitions that the
 * shutting down broker leads, and moves leadership of those partitions to another broker
 * that is in that partition's ISR.
 *
 * @param id Id of the broker to shutdown.
 * @return The number of partitions that the broker still leads.
 */
def shutdownBroker(id: Int) : Set[TopicAndPartition] = {
  controllerContext.brokerShutdownLock synchronized {
    val allPartitionsAndReplicationFactorOnBroker: Set[(TopicAndPartition, Int)] =
      inLock(controllerContext.controllerLock) {
        controllerContext.partitionsOnBroker(id)
          .map(topicAndPartition => (topicAndPartition, controllerContext.partitionReplicaAssignment(topicAndPartition).size))
      }
    allPartitionsAndReplicationFactorOnBroker.foreach {
      case(topicAndPartition, replicationFactor) =>
        // Move leadership serially to relinquish lock.
        inLock(controllerContext.controllerLock) {
          controllerContext.partitionLeadershipInfo.get(topicAndPartition).foreach { currLeaderIsrAndControllerEpoch =>
            if (currLeaderIsrAndControllerEpoch.leaderAndIsr.leader == id) { // this broker is the leader replica
              // If the broker leads the topic partition, transition the leader and update isr. Updates zk and
              // notifies all affected brokers
              partitionStateMachine.handleStateChanges(Set(topicAndPartition), OnlinePartition, // partition state change needed: the target state is still Online, but the leader moves
                controlledShutdownPartitionLeaderSelector) // selector policy: new leader = a replica in the ISR that is not being shut down
            }
            else { // this broker is a follower replica
              // Stop the replica first. The state change below initiates ZK changes which should take some time
              // before which the stop replica request should be completed (in most cases)
              brokerRequestBatch.newBatch()
              brokerRequestBatch.addStopReplicaRequestForBrokers(Seq(id), topicAndPartition.topic,  // Stop Replica
                topicAndPartition.partition, deletePartition = false)
              brokerRequestBatch.sendRequestsToBrokers(epoch, controllerContext.correlationId.getAndIncrement)
              // If the broker is a follower, updates the isr in ZK and notifies the current leader
              replicaStateMachine.handleStateChanges(Set(PartitionAndReplica(topicAndPartition.topic,
                topicAndPartition.partition, id)), OfflineReplica) // move the replica to the OfflineReplica state
            }
          }
        }
    }
  }
}
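The selector policy noted in the comment above ("new leader = a replica in the ISR that is not being shut down") is simple enough to model directly. A hedged Python sketch with invented names; the real ControlledShutdownPartitionLeaderSelector also returns the shrunk ISR so it can be written back to ZK:

```python
def controlled_shutdown_select(current_isr, shutting_down):
    """Pick a new leader for a partition whose leader is shutting down:
    the first ISR member not in the shutting-down set. The new ISR is
    the old ISR minus the shutting-down brokers."""
    new_isr = [b for b in current_isr if b not in shutting_down]
    if not new_isr:
        raise RuntimeError("no surviving replica in ISR; cannot move leadership")
    return new_isr[0], new_isr

leader, isr = controlled_shutdown_select([3, 1, 2], {3})
print(leader, isr)  # -> 1 [1, 2]
```

When the whole ISR is on shutting-down brokers there is no safe target, which is why a controlled shutdown can leave a broker still leading some partitions (the real method returns the set it still leads).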

 

onControllerFailover:

Invoked after the current broker has been elected controller via ZK, to do the initialization work:

1. Increment the controller epoch
2. Initialize the controller context
3. Start up the channel manager, the replica state machine and the partition state machine

If any problem occurs during initialization, the broker resigns as controller (so that a new election is triggered).

/**
 * This callback is invoked by the zookeeper leader elector on electing the current broker as the new controller.
 * It does the following things on the become-controller state change -
 * 1. Register controller epoch changed listener
 * 2. Increments the controller epoch
 * 3. Initializes the controller's context object that holds cache objects for current topics, live brokers and
 *    leaders for all existing partitions.
 * 4. Starts the controller's channel manager
 * 5. Starts the replica state machine
 * 6. Starts the partition state machine
 * If it encounters any unexpected exception/error while becoming controller, it resigns as the current controller.
 * This ensures another controller election will be triggered and there will always be an actively serving controller
 */
def onControllerFailover() {
  if(isRunning) {
    // increment the controller epoch
    incrementControllerEpoch(zkClient)
    // before reading source of truth from zookeeper, register the listeners to get broker/topic callbacks
    registerReassignedPartitionsListener()
    registerPreferredReplicaElectionListener()
    partitionStateMachine.registerListeners()
    replicaStateMachine.registerListeners()
    initializeControllerContext()
    replicaStateMachine.startup()
    partitionStateMachine.startup()
    // register the partition change listeners for all existing topics on failover
    controllerContext.allTopics.foreach(topic => partitionStateMachine.registerPartitionChangeListener(topic))
    maybeTriggerPartitionReassignment()
    maybeTriggerPreferredReplicaElection()
    /* send partition leadership info to all live brokers */
    sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq)
    if (config.autoLeaderRebalanceEnable) {
      info("starting the partition rebalance scheduler")
      autoRebalanceScheduler.startup()
      autoRebalanceScheduler.schedule("partition-rebalance-thread", checkAndTriggerPartitionRebalance,
        5, config.leaderImbalanceCheckIntervalSeconds, TimeUnit.SECONDS)
    }
    deleteTopicManager.start()
  }
  else
    info("Controller has been shut down, aborting startup/failover")
}
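The auto-rebalance thread scheduled above (checkAndTriggerPartitionRebalance) periodically measures how far leadership has drifted from the preferred replica, i.e. the first replica in each partition's assigned list. The real check is done per broker against leader.imbalance.per.broker.percentage; the Python below is a simplified, illustrative sketch of that ratio, not Kafka's code:

```python
def imbalance_ratio_for_broker(assignment, leadership, broker):
    """Fraction of partitions that prefer `broker` (it is first in the
    assigned replica list) but are currently led by some other broker."""
    preferred = [tp for tp, replicas in assignment.items() if replicas[0] == broker]
    if not preferred:
        return 0.0
    off_preferred = sum(1 for tp in preferred if leadership.get(tp) != broker)
    return off_preferred / len(preferred)

assignment = {("t", 0): [1, 2], ("t", 1): [1, 3], ("t", 2): [2, 1]}
leadership = {("t", 0): 2, ("t", 1): 1, ("t", 2): 2}
# broker 1 prefers ("t",0) and ("t",1); only ("t",0) is led elsewhere
print(imbalance_ratio_for_broker(assignment, leadership, 1))  # -> 0.5
```

When the ratio for a broker exceeds the configured threshold, the controller triggers a preferred replica leader election for that broker's partitions.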

 

onNewTopicCreation:

1. register a partition change listener for each new topic
2. call onNewPartitionCreation

/**
 * This callback is invoked by the partition state machine's topic change listener with the list of new topics
 * and partitions as input. It does the following -
 * 1. Registers partition change listener. This is not required until KAFKA-347
 * 2. Invokes the new partition callback
 * 3. Send metadata request with the new topic to all brokers so they allow requests for that topic to be served
 */
def onNewTopicCreation(topics: Set[String], newPartitions: Set[TopicAndPartition]) {
  info("New topic creation callback for %s".format(newPartitions.mkString(",")))
  // subscribe to partition changes
  topics.foreach(topic => partitionStateMachine.registerPartitionChangeListener(topic))
  onNewPartitionCreation(newPartitions)
}

onNewPartitionCreation:

1. new partitions -> NewPartition
2. all replicas of new partitions -> NewReplica
3. new partitions -> OnlinePartition
4. all replicas of new partitions -> OnlineReplica

/**
 * This callback is invoked by the topic change callback with the list of failed brokers as input.
 * It does the following -
 * 1. Move the newly created partitions to the NewPartition state
 * 2. Move the newly created partitions from NewPartition->OnlinePartition state
 */
def onNewPartitionCreation(newPartitions: Set[TopicAndPartition]) {
  info("New partition creation callback for %s".format(newPartitions.mkString(",")))
  partitionStateMachine.handleStateChanges(newPartitions, NewPartition)
  replicaStateMachine.handleStateChanges(controllerContext.replicasForPartition(newPartitions), NewReplica)
  partitionStateMachine.handleStateChanges(newPartitions, OnlinePartition, offlinePartitionSelector)
  replicaStateMachine.handleStateChanges(controllerContext.replicasForPartition(newPartitions), OnlineReplica)
}
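The four handleStateChanges calls walk each new partition and its replicas through a fixed transition order. A minimal Python model of the legal-transition check (the state names come from the calls above; the machine itself is a simplification for illustration, not the controller's full state machine):

```python
# legal transitions exercised by onNewPartitionCreation, in call order
LEGAL = {
    ("NonExistentPartition", "NewPartition"),
    ("NonExistentReplica", "NewReplica"),
    ("NewPartition", "OnlinePartition"),
    ("NewReplica", "OnlineReplica"),
}

def transition(state, target):
    """Advance the state machine, rejecting transitions the controller
    would never perform during new-partition creation."""
    if (state, target) not in LEGAL:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "NonExistentPartition"
for target in ("NewPartition", "OnlinePartition"):
    s = transition(s, target)
print(s)  # -> OnlinePartition
```

Note the interleaving in the Scala: partitions and replicas advance together (NewPartition, then NewReplica, then OnlinePartition, then OnlineReplica), because a partition can only go online once its replicas exist.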

 

onPartitionReassignment: (OAR: old assigned replicas; RAR: new re-assigned replicas when reassignment completes)

The comment in the code is very detailed; refer to the example it walks through.

1. In ZK, update AR to OAR + RAR, since during the reassignment all of these replicas can be considered assigned to the partition
2. Send a LeaderAndIsr request to every replica in OAR + RAR (with AR = OAR + RAR), by forcing a leader epoch bump in ZK
3. For RAR - OAR, create the new replicas and move them to the NewReplica state
4. Wait until all replicas in RAR have caught up with the leader
5. Move all replicas in RAR to OnlineReplica
6. In memory, set AR to RAR (ZK still holds OAR + RAR)
7. If the current leader is not in RAR, elect a new leader from RAR
8. Move the replicas in OAR - RAR to OfflineReplica: shrink the ISR to remove OAR - RAR, send a LeaderAndIsr request to notify the leader, then send StopReplica (delete = false)
9. Move the replicas in OAR - RAR to NonExistentReplica
10. In ZK, update AR to RAR, marking the reassignment complete

/**
 * This callback is invoked by the reassigned partitions listener. When an admin command initiates a partition
 * reassignment, it creates the /admin/reassign_partitions path that triggers the zookeeper listener.
 * Reassigning replicas for a partition goes through a few steps listed in the code.
 * RAR = Reassigned replicas
 * OAR = Original list of replicas for partition
 * AR = current assigned replicas
 *
 * 1. Update AR in ZK with OAR + RAR.
 * 2. Send LeaderAndIsr request to every replica in OAR + RAR (with AR as OAR + RAR). We do this by forcing an update
 *    of the leader epoch in zookeeper.
 * 3. Start new replicas RAR - OAR by moving replicas in RAR - OAR to NewReplica state.
 * 4. Wait until all replicas in RAR are in sync with the leader.
 * 5. Move all replicas in RAR to OnlineReplica state.
 * 6. Set AR to RAR in memory.
 * 7. If the leader is not in RAR, elect a new leader from RAR. If new leader needs to be elected from RAR, a LeaderAndIsr
 *    will be sent. If not, then leader epoch will be incremented in zookeeper and a LeaderAndIsr request will be sent.
 *    In any case, the LeaderAndIsr request will have AR = RAR. This will prevent the leader from adding any replica in
 *    RAR - OAR back in the isr.
 * 8. Move all replicas in OAR - RAR to OfflineReplica state. As part of OfflineReplica state change, we shrink the
 *    isr to remove OAR - RAR in zookeeper and sent a LeaderAndIsr ONLY to the Leader to notify it of the shrunk isr.
 *    After that, we send a StopReplica (delete = false) to the replicas in OAR - RAR.
 * 9. Move all replicas in OAR - RAR to NonExistentReplica state. This will send a StopReplica (delete = false) to
 *    the replicas in OAR - RAR to physically delete the replicas on disk.
 * 10. Update AR in ZK with RAR.
 * 11. Update the /admin/reassign_partitions path in ZK to remove this partition.
 * 12. After electing leader, the replicas and isr information changes. So resend the update metadata request to every broker.
 *
 * For example, if OAR = {1, 2, 3} and RAR = {4,5,6}, the values in the assigned replica (AR) and leader/isr path in ZK
 * may go through the following transition.
 * AR                 leader/isr
 * {1,2,3}            1/{1,2,3}           (initial state)
 * {1,2,3,4,5,6}      1/{1,2,3}           (step 2)
 * {1,2,3,4,5,6}      1/{1,2,3,4,5,6}     (step 4)
 * {1,2,3,4,5,6}      4/{1,2,3,4,5,6}     (step 7)
 * {1,2,3,4,5,6}      4/{4,5,6}           (step 8)
 * {4,5,6}            4/{4,5,6}           (step 10)
 *
 * Note that we have to update AR in ZK with RAR last since it's the only place where we store OAR persistently.
 * This way, if the controller crashes before that step, we can still recover.
 */
def onPartitionReassignment(topicAndPartition: TopicAndPartition, reassignedPartitionContext: ReassignedPartitionsContext) {
  val reassignedReplicas = reassignedPartitionContext.newReplicas
  areReplicasInIsr(topicAndPartition.topic, topicAndPartition.partition, reassignedReplicas) match {
    case false =>
      info("New replicas %s for partition %s being ".format(reassignedReplicas.mkString(","), topicAndPartition) +
        "reassigned not yet caught up with the leader")
      val newReplicasNotInOldReplicaList = reassignedReplicas.toSet -- controllerContext.partitionReplicaAssignment(topicAndPartition).toSet
      val newAndOldReplicas = (reassignedPartitionContext.newReplicas ++ controllerContext.partitionReplicaAssignment(topicAndPartition)).toSet
      //1. Update AR in ZK with OAR + RAR.
      updateAssignedReplicasForPartition(topicAndPartition, newAndOldReplicas.toSeq)
      //2. Send LeaderAndIsr request to every replica in OAR + RAR (with AR as OAR + RAR).
      updateLeaderEpochAndSendRequest(topicAndPartition, controllerContext.partitionReplicaAssignment(topicAndPartition),
        newAndOldReplicas.toSeq)
      //3. replicas in RAR - OAR -> NewReplica
      startNewReplicasForReassignedPartition(topicAndPartition, reassignedPartitionContext, newReplicasNotInOldReplicaList)
      info("Waiting for new replicas %s for partition %s being ".format(reassignedReplicas.mkString(","), topicAndPartition) +
        "reassigned to catch up with the leader")
    case true =>
      //4. Wait until all replicas in RAR are in sync with the leader.
      val oldReplicas = controllerContext.partitionReplicaAssignment(topicAndPartition).toSet -- reassignedReplicas.toSet
      //5. replicas in RAR -> OnlineReplica
      reassignedReplicas.foreach { replica =>
        replicaStateMachine.handleStateChanges(Set(new PartitionAndReplica(topicAndPartition.topic, topicAndPartition.partition,
          replica)), OnlineReplica)
      }
      //6. Set AR to RAR in memory.
      //7. Send LeaderAndIsr request with a potential new leader (if current leader not in RAR) and
      //   a new AR (using RAR) and same isr to every broker in RAR
      moveReassignedPartitionLeaderIfRequired(topicAndPartition, reassignedPartitionContext)
      //8. replicas in OAR - RAR -> Offline (force those replicas out of isr)
      //9. replicas in OAR - RAR -> NonExistentReplica (force those replicas to be deleted)
      stopOldReplicasOfReassignedPartition(topicAndPartition, reassignedPartitionContext, oldReplicas)
      //10. Update AR in ZK with RAR.
      updateAssignedReplicasForPartition(topicAndPartition, reassignedReplicas)
      //11. Update the /admin/reassign_partitions path in ZK to remove this partition.
      removePartitionFromReassignedPartitions(topicAndPartition)
      info("Removed partition %s from the list of reassigned partitions in zookeeper".format(topicAndPartition))
      controllerContext.partitionsBeingReassigned.remove(topicAndPartition)
      //12. After electing leader, the replicas and isr information changes, so resend the update metadata request to every broker
      sendUpdateMetadataRequest(controllerContext.liveOrShuttingDownBrokerIds.toSeq)
      // signal delete topic thread if reassignment for some partitions belonging to topics being deleted just completed
      deleteTopicManager.resumeDeletionForTopics(Set(topicAndPartition.topic))
  }
}
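The AR / leader-ISR transition table in the comment can be replayed mechanically. The Python sketch below (illustrative only; names invented) reproduces the six snapshots for OAR = [1,2,3], RAR = [4,5,6] with initial leader 1:

```python
def reassignment_snapshots(oar, rar, leader):
    """Yield (AR, leader, ISR) snapshots matching the numbered steps in the
    comment: initial, 2 (AR grows to OAR + RAR), 4 (RAR catches up and joins
    the ISR), 7 (leader moves into RAR if needed), 8 (ISR shrinks to RAR),
    10 (AR becomes RAR)."""
    union = oar + [r for r in rar if r not in oar]   # OAR + RAR, order-preserving
    snaps = [(oar,   leader, oar),                   # initial state
             (union, leader, oar),                   # step 2
             (union, leader, union)]                 # step 4
    if leader not in rar:                            # step 7: elect a leader from RAR
        leader = rar[0]
    snaps += [(union, leader, union),                # step 7
              (union, leader, rar),                  # step 8
              (rar,   leader, rar)]                  # step 10
    return snaps

for ar, ldr, isr in reassignment_snapshots([1, 2, 3], [4, 5, 6], 1):
    print(ar, ldr, isr)
```

Running it prints the same six rows as the table, which makes the crash-safety argument concrete: until the final snapshot, ZK still contains OAR inside AR, so a restarted controller can resume or roll forward the reassignment.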
