• Flink 1.13 Source Code Analysis: The Flink Job Submission Process


    Click here for the full table of contents of the Flink 1.13 source code analysis series

    Related chapter: Flink 1.13 Source Code Analysis: JobManager Startup Process, WebMonitorEndpoint Startup

    Flink 1.13 Source Code Analysis: The Flink Job Submission Process (Part 2)

    Contents

    Preface

    1. Preliminaries Before Submitting and Running a Flink Job

    2. Submitting the Flink Job

    2.1 Building the StreamExecutionEnvironment

    2.2 Building the Operators

    2.3 How env.execute Is Implemented

    Summary


    Preface

            In the previous chapters we analyzed, at the source level, how the master and worker nodes of a Flink cluster start up. We often say that Flink can turn user code into a high-level abstraction, but what is that abstraction? I would put it this way: it is an abstraction of a programming model for computational applications over data of arbitrary types, computation logic of arbitrary kinds, and arbitrary task complexity and data scale.

            Over the next few chapters we will analyze the Flink job submission process, the interaction between the Flink JobMaster and the JobManager, and the construction and translation of Flink's StreamGraph, JobGraph, and ExecutionGraph. This chapter covers the job submission process.

    1. Preliminaries Before Submitting and Running a Flink Job

            To submit a Flink job we run the flink run command, which executes the bin/flink shell script. The following line in that script tells us what Flink's entry class is:

    exec "${JAVA_RUN}" $JVM_ARGS $FLINK_ENV_JAVA_OPTS "${log_setting[@]}" -classpath "`manglePathList "$CC_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" org.apache.flink.client.cli.CliFrontend "$@"

    Next, let's look at the main method of this org.apache.flink.client.cli.CliFrontend class:

    public static void main(final String[] args) {
        EnvironmentInformation.logEnvironmentInfo(LOG, "Command Line Client", args);

        // 1. find the configuration directory
        // TODO Get the configuration directory
        final String configurationDirectory = getConfigurationDirectoryFromEnv();

        // 2. load the global configuration
        // TODO Parse the configuration file (flink-conf)
        final Configuration configuration =
                GlobalConfiguration.loadConfiguration(configurationDirectory);

        // 3. load the custom command lines
        // TODO Build the objects that parse the args command line; three kinds are created
        final List<CustomCommandLine> customCommandLines =
                loadCustomCommandLines(configuration, configurationDirectory);

        int retCode = 31;
        try {
            final CliFrontend cli = new CliFrontend(configuration, customCommandLines);
            // TODO Install the security context
            SecurityUtils.install(new SecurityConfiguration(cli.configuration));
            retCode = SecurityUtils.getInstalledContext().runSecured(() -> cli.parseAndRun(args));
        } catch (Throwable t) {
            final Throwable strippedThrowable =
                    ExceptionUtils.stripException(t, UndeclaredThrowableException.class);
            LOG.error("Fatal error while running command line interface.", strippedThrowable);
            strippedThrowable.printStackTrace();
        } finally {
            System.exit(retCode);
        }
    }

    The main method does the following:

    1. Locate the configuration directory

    2. Parse the configuration file and build the Configuration object

    3. Build the list of CustomCommandLine objects from the command line

    4. Parse the command-line arguments and execute

    Let's look at the most important step, parsing and executing, by stepping into cli.parseAndRun(args):

    public int parseAndRun(String[] args) {
        // check for action
        // TODO Validate the command-line arguments
        if (args.length < 1) {
            CliFrontendParser.printHelp(customCommandLines);
            System.out.println("Please specify an action.");
            return 1;
        }

        // get action
        // TODO Parse the action that follows "flink" on the command line;
        //  for "flink run", the action is "run"
        String action = args[0];

        // remove action from parameters
        final String[] params = Arrays.copyOfRange(args, 1, args.length);
        try {
            // do action
            switch (action) {
                case ACTION_RUN:
                    // TODO The "run" action
                    run(params);
                    return 0;
                case ACTION_RUN_APPLICATION:
                    runApplication(params);
                    return 0;
                case ...
                case "-h":
                case "--help":
                    CliFrontendParser.printHelp(customCommandLines);
                    return 0;
                case "-v":
                case "--version":
                    String version = EnvironmentInformation.getVersion();
                    String commitID = EnvironmentInformation.getRevisionInformation().commitId;
                    System.out.print("Version: " + version);
                    System.out.println(
                            commitID.equals(EnvironmentInformation.UNKNOWN)
                                    ? ""
                                    : ", Commit ID: " + commitID);
                    return 0;
                default:
                    System.out.printf("\"%s\" is not a valid action.\n", action);
                    ... ...
                    return 1;
            }
        } catch (....) {
            ...
        }
    }

    As the code above shows, the first command-line argument is parsed here. Taking the run command as an example: when the argument following flink is run, run(params) is invoked. Let's step into it:

    protected void run(String[] args) throws Exception {
        LOG.info("Running 'run' command.");
        final Options commandOptions = CliFrontendParser.getRunCommandOptions();
        // TODO The actual parsing of the command-line arguments starts here
        final CommandLine commandLine = getCommandLine(commandOptions, args, true);

        // evaluate help flag
        // TODO For "flink run -h", print the help text
        if (commandLine.hasOption(HELP_OPTION.getOpt())) {
            CliFrontendParser.printHelpForRun(customCommandLines);
            return;
        }

        final CustomCommandLine activeCommandLine =
                validateAndGetActiveCommandLine(checkNotNull(commandLine));

        // Create the program options object
        final ProgramOptions programOptions = ProgramOptions.create(commandLine);

        // TODO Get the job jar and its dependency jars
        final List<URL> jobJars = getJobJarAndDependencies(programOptions);

        // TODO Wrap the parsed options into a configuration object
        final Configuration effectiveConfiguration =
                getEffectiveConfiguration(activeCommandLine, commandLine, programOptions, jobJars);

        LOG.debug("Effective executor configuration: {}", effectiveConfiguration);

        // TODO Get the packaged program
        try (PackagedProgram program = getPackagedProgram(programOptions, effectiveConfiguration)) {
            // TODO Execute the program
            executeProgram(effectiveConfiguration, program);
        }
    }

    Here all the command-line options are parsed; if -h is present, the help text is printed. The program options object is then created from the arguments, the job jar and its dependency jars are resolved, and everything parsed so far is wrapped into one effective configuration object, from which the packaged job program is built and finally executed. Let's see how executeProgram runs this program:

    protected void executeProgram(final Configuration configuration, final PackagedProgram program)
            throws ProgramInvocationException {
        // TODO Delegate to ClientUtils
        ClientUtils.executeProgram(
                new DefaultExecutorServiceLoader(), configuration, program, false, false);
    }

    Stepping into ClientUtils.executeProgram:

    public static void executeProgram(
            PipelineExecutorServiceLoader executorServiceLoader,
            Configuration configuration,
            PackagedProgram program,
            boolean enforceSingleJobExecution,
            boolean suppressSysout)
            throws ProgramInvocationException {
        checkNotNull(executorServiceLoader);
        final ClassLoader userCodeClassLoader = program.getUserCodeClassLoader();
        final ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
        try {
            Thread.currentThread().setContextClassLoader(userCodeClassLoader);
            LOG.info(
                    "Starting program (detached: {})",
                    !configuration.getBoolean(DeploymentOptions.ATTACHED));

            // TODO Set up the execution environment context
            ContextEnvironment.setAsContext(
                    executorServiceLoader,
                    configuration,
                    userCodeClassLoader,
                    enforceSingleJobExecution,
                    suppressSysout);
            StreamContextEnvironment.setAsContext(
                    executorServiceLoader,
                    configuration,
                    userCodeClassLoader,
                    enforceSingleJobExecution,
                    suppressSysout);
            try {
                // TODO The actual submission and execution
                program.invokeInteractiveModeForExecution();
            } finally {
                ContextEnvironment.unsetAsContext();
                StreamContextEnvironment.unsetAsContext();
            }
        } finally {
            Thread.currentThread().setContextClassLoader(contextClassLoader);
        }
    }

    In this method the execution environment context is first configured from the given configuration object, and then program.invokeInteractiveModeForExecution() actually starts executing the job. Let's enter that method:

    public void invokeInteractiveModeForExecution() throws ProgramInvocationException {
        FlinkSecurityManager.monitorUserSystemExitForCurrentThread();
        try {
            // TODO Invoke the main method of the user's application
            callMainMethod(mainClass, args);
        } finally {
            FlinkSecurityManager.unmonitorUserSystemExitForCurrentThread();
        }
    }

    And into callMainMethod(mainClass, args):

    private static void callMainMethod(Class<?> entryClass, String[] args)
            throws ProgramInvocationException {
        Method mainMethod;
        if (!Modifier.isPublic(entryClass.getModifiers())) {
            throw new ProgramInvocationException(
                    "The class " + entryClass.getName() + " must be public.");
        }

        // TODO Look up the main method via reflection
        try {
            mainMethod = entryClass.getMethod("main", String[].class);
        } catch (NoSuchMethodException e) {
            throw new ProgramInvocationException(
                    "The class " + entryClass.getName() + " has no main(String[]) method.");
        } catch (Throwable t) {
            throw new ProgramInvocationException(
                    "Could not look up the main(String[]) method from the class "
                            + entryClass.getName()
                            + ": "
                            + t.getMessage(),
                    t);
        }

        if (!Modifier.isStatic(mainMethod.getModifiers())) {
            throw new ProgramInvocationException(
                    "The class " + entryClass.getName() + " declares a non-static main method.");
        }
        if (!Modifier.isPublic(mainMethod.getModifiers())) {
            throw new ProgramInvocationException(
                    "The class " + entryClass.getName() + " declares a non-public main method.");
        }

        try {
            // TODO Invoke the main method
            mainMethod.invoke(null, (Object) args);
        } catch (IllegalArgumentException e) {
            throw new ProgramInvocationException(
                    "Could not invoke the main method, arguments are not matching.", e);
        } catch (IllegalAccessException e) {
            throw new ProgramInvocationException(
                    "Access to the main method was denied: " + e.getMessage(), e);
        } catch (InvocationTargetException e) {
            Throwable exceptionInMethod = e.getTargetException();
            if (exceptionInMethod instanceof Error) {
                throw (Error) exceptionInMethod;
            } else if (exceptionInMethod instanceof ProgramParametrizationException) {
                throw (ProgramParametrizationException) exceptionInMethod;
            } else if (exceptionInMethod instanceof ProgramInvocationException) {
                throw (ProgramInvocationException) exceptionInMethod;
            } else {
                throw new ProgramInvocationException(
                        "The main method caused an error: " + exceptionInMethod.getMessage(),
                        exceptionInMethod);
            }
        } catch (Throwable t) {
            throw new ProgramInvocationException(
                    "An error occurred while invoking the program's main method: " + t.getMessage(),
                    t);
        }
    }

    As we can see, the main method of the user jar's entry class is obtained via reflection, and mainMethod.invoke then runs it. With that, the preliminaries are complete. Next, let's see how the logic we wrote in the main program gets submitted for execution.

    2. Submitting the Flink Job

    2.1 Building the StreamExecutionEnvironment

            We will use one of the examples shipped in flink-examples-streaming, which follows the same programming model we use when writing Flink jobs ourselves. First, the construction of the Flink entry class:

    /* TODO
       1. Initialize the StateBackend
       2. Parse all checkpoint-related configuration
    */
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
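    For orientation, the program we are tracing is essentially the streaming WordCount from flink-examples-streaming. An abridged sketch (the input path is hypothetical; Tokenizer is shown in section 2.2):

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class WordCount {
        public static void main(String[] args) throws Exception {
            final StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();
            DataStream<String> text = env.readTextFile("/path/to/input"); // hypothetical path
            DataStream<Tuple2<String, Integer>> counts =
                    text.flatMap(new Tokenizer()) // split lines into (word, 1) pairs
                            .keyBy(value -> value.f0) // group by the word
                            .sum(1); // sum the counts per word
            counts.print();
            env.execute("Streaming WordCount");
        }
    }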

    Step into getExecutionEnvironment, and then into the overload getExecutionEnvironment(Configuration):

    public static StreamExecutionEnvironment getExecutionEnvironment(Configuration configuration) {
        return Utils.resolveFactory(threadLocalContextEnvironmentFactory, contextEnvironmentFactory)
                // TODO Build the StreamExecutionEnvironment
                .map(factory -> factory.createExecutionEnvironment(configuration))
                .orElseGet(() -> StreamExecutionEnvironment.createLocalEnvironment(configuration));
    }

    Stepping into factory.createExecutionEnvironment and choosing the StreamContextEnvironment implementation brings us to the factory that was registered earlier in StreamContextEnvironment.setAsContext:

    public static void setAsContext(
            final PipelineExecutorServiceLoader executorServiceLoader,
            final Configuration configuration,
            final ClassLoader userCodeClassLoader,
            final boolean enforceSingleJobExecution,
            final boolean suppressSysout) {
        StreamExecutionEnvironmentFactory factory =
                conf -> {
                    Configuration mergedConfiguration = new Configuration();
                    mergedConfiguration.addAll(configuration);
                    mergedConfiguration.addAll(conf);
                    // TODO Initialize the StreamContextEnvironment
                    return new StreamContextEnvironment(
                            executorServiceLoader,
                            mergedConfiguration,
                            userCodeClassLoader,
                            enforceSingleJobExecution,
                            suppressSysout);
                };
        initializeContextEnvironment(factory);
    }

    Here the StreamContextEnvironment is initialized. Step into its constructor and follow it up to the parent constructor:

    @PublicEvolving
    public StreamExecutionEnvironment(
            final PipelineExecutorServiceLoader executorServiceLoader,
            final Configuration configuration,
            final ClassLoader userClassloader) {
        this.executorServiceLoader = checkNotNull(executorServiceLoader);
        this.configuration = new Configuration(checkNotNull(configuration));
        this.userClassloader =
                userClassloader == null ? getClass().getClassLoader() : userClassloader;

        /*
        TODO Configure the various components:
         1. Initialize the StateBackend
         2. Initialize the checkpoint-related parameters
        */
        this.configure(this.configuration, this.userClassloader);
    }

    Now step into this.configure:

    @PublicEvolving
    public void configure(ReadableConfig configuration, ClassLoader classLoader) {
        configuration
                .getOptional(StreamPipelineOptions.TIME_CHARACTERISTIC)
                .ifPresent(this::setStreamTimeCharacteristic);
        // TODO Load the StateBackend
        Optional.ofNullable(loadStateBackend(configuration, classLoader))
                .ifPresent(this::setStateBackend);
        configuration
                .getOptional(PipelineOptions.OPERATOR_CHAINING)
                .ifPresent(c -> this.isChainingEnabled = c);
        configuration
                .getOptional(ExecutionOptions.BUFFER_TIMEOUT)
                .ifPresent(t -> this.setBufferTimeout(t.toMillis()));
        configuration
                .getOptional(DeploymentOptions.JOB_LISTENERS)
                .ifPresent(listeners -> registerCustomListeners(classLoader, listeners));
        configuration
                .getOptional(PipelineOptions.CACHED_FILES)
                .ifPresent(
                        f -> {
                            this.cacheFile.clear();
                            this.cacheFile.addAll(DistributedCache.parseCachedFilesFromString(f));
                        });
        configuration
                .getOptional(ExecutionOptions.RUNTIME_MODE)
                .ifPresent(
                        runtimeMode ->
                                this.configuration.set(ExecutionOptions.RUNTIME_MODE, runtimeMode));
        configuration
                .getOptional(ExecutionOptions.SORT_INPUTS)
                .ifPresent(
                        sortInputs ->
                                this.getConfiguration()
                                        .set(ExecutionOptions.SORT_INPUTS, sortInputs));
        configuration
                .getOptional(ExecutionOptions.USE_BATCH_STATE_BACKEND)
                .ifPresent(
                        sortInputs ->
                                this.getConfiguration()
                                        .set(ExecutionOptions.USE_BATCH_STATE_BACKEND, sortInputs));
        configuration
                .getOptional(PipelineOptions.NAME)
                .ifPresent(jobName -> this.getConfiguration().set(PipelineOptions.NAME, jobName));
        config.configure(configuration, classLoader);

        // TODO Parse and apply the checkpoint-related parameters:
        /*
        TODO 1. Parse all checkpoint-related settings out of the Configuration into the CheckpointConfig
             2. Later, when the operators are parsed and the StreamGraph is built, this CheckpointConfig is handed to the StreamGraph
             3. When the StreamGraph is turned into a JobGraph, it is passed along again
        */
        checkpointCfg.configure(configuration);
    }

    Two main things happen here:

    1. The StateBackend is loaded
    2. The checkpoint-related parameters are parsed and applied (a programmatic sketch follows below):
      1. All checkpoint-related settings are parsed out of the Configuration into a CheckpointConfig object
      2. Later, when the operators are parsed and the StreamGraph is built, this CheckpointConfig is handed to the StreamGraph
      3. When the StreamGraph is turned into a JobGraph, it is passed along again
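    As a concrete reference, the settings that checkpointCfg.configure(configuration) reads from config keys have programmatic twins on CheckpointConfig. A minimal sketch of my own (not from the walkthrough; the values are arbitrary):

    // inside main(), after creating env
    env.enableCheckpointing(10_000L); // "execution.checkpointing.interval": every 10s
    CheckpointConfig checkpointConfig = env.getCheckpointConfig();
    checkpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE); // "execution.checkpointing.mode"
    checkpointConfig.setMinPauseBetweenCheckpoints(500L);
    checkpointConfig.setCheckpointTimeout(60_000L);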

    Let's first look at how the state backend is loaded, in loadStateBackend(configuration, classLoader):

    private StateBackend loadStateBackend(ReadableConfig configuration, ClassLoader classLoader) {
        try {
            // TODO Build the StateBackend from the StateBackend-related configuration
            return StateBackendLoader.loadStateBackendFromConfig(configuration, classLoader, null);
        } catch (DynamicCodeLoadingException | IOException e) {
            throw new WrappingRuntimeException(e);
        }
    }

    Then step into StateBackendLoader.loadStateBackendFromConfig:

    public static StateBackend loadStateBackendFromConfig(
            ReadableConfig config, ClassLoader classLoader, @Nullable Logger logger)
            throws IllegalConfigurationException, DynamicCodeLoadingException, IOException {
        checkNotNull(config, "config");
        checkNotNull(classLoader, "classLoader");

        // TODO Read the StateBackend-related configuration
        final StateBackend backend =
                loadUnwrappedStateBackendFromConfig(config, classLoader, logger);
        checkArgument(
                !(backend instanceof DelegatingStateBackend),
                "expecting non-delegating state backend");
        if (config.get(CheckpointingOptions.ENABLE_STATE_CHANGE_LOG) && (backend != null)) {
            return loadChangelogStateBackend(backend, classLoader);
        } else {
            return backend;
        }
    }

    Here the StateBackend configuration is read and the resulting StateBackend is returned at the end. Step into loadUnwrappedStateBackendFromConfig:

    private static StateBackend loadUnwrappedStateBackendFromConfig(
            ReadableConfig config, ClassLoader classLoader, @Nullable Logger logger)
            throws IllegalConfigurationException, DynamicCodeLoadingException, IOException {
        checkNotNull(config, "config");
        checkNotNull(classLoader, "classLoader");

        final String backendName = config.get(StateBackendOptions.STATE_BACKEND);
        if (backendName == null) {
            return null;
        }

        // by default the factory class is the backend name
        String factoryClassName = backendName;

        switch (backendName.toLowerCase()) {
            case MEMORY_STATE_BACKEND_NAME:
                MemoryStateBackend backend =
                        new MemoryStateBackendFactory().createFromConfig(config, classLoader);
                if (logger != null) {
                    logger.warn(
                            "MemoryStateBackend has been deprecated. Please use 'hashmap' state "
                                    + "backend instead with JobManagerCheckpointStorage for equivalent "
                                    + "functionality");
                    logger.info("State backend is set to job manager {}", backend);
                }
                return backend;

            case FS_STATE_BACKEND_NAME:
                if (logger != null) {
                    logger.warn(
                            "{} state backend has been deprecated. Please use 'hashmap' state "
                                    + "backend instead.",
                            backendName.toLowerCase());
                }
                // fall through and use the HashMapStateBackend instead which
                // utilizes the same HeapKeyedStateBackend runtime implementation.
            case HASHMAP_STATE_BACKEND_NAME:
                HashMapStateBackend hashMapStateBackend =
                        new HashMapStateBackendFactory().createFromConfig(config, classLoader);
                if (logger != null) {
                    logger.info("State backend is set to heap memory {}", hashMapStateBackend);
                }
                return hashMapStateBackend;

            // TODO For "rocksdb", load the RocksDB backend factory
            case ROCKSDB_STATE_BACKEND_NAME:
                factoryClassName = ROCKSDB_STATE_BACKEND_FACTORY;
                // fall through to the 'default' case that uses reflection to load the backend
                // that way we can keep RocksDB in a separate module
            default:
                ... ...
                return factory.createFromConfig(config, classLoader);
        }
    }

    Here the configured state backend name is matched. Before Flink 1.13, three state backend settings were supported:

    1. jobmanager (MemoryStateBackend)
    2. filesystem (FsStateBackend)
    3. rocksdb

    Since Flink 1.13 only two state backends remain: HashMap and RocksDB.

    1. The HashMap backend is the "state in memory" option mentioned earlier. Internally, the hashmap state backend keeps state directly as objects on the TaskManager's JVM heap. Regular state, as well as the records collected in windows and their triggers, are stored as key-value pairs, so the underlying structure is a hash map, which is where this backend gets its name.

    2. HashMapStateBackend works in memory, so reads and writes are very fast; however, the state size is bounded by the cluster's available memory, and if the application's state keeps growing over time it will eventually exhaust memory. RocksDB stores state on disk, so it can scale with the available disk space, and it is the only backend that supports incremental checkpoints, which makes it well suited to very large state. The trade-off is that every state access requires serialization/deserialization and possibly a read from disk, so its average read/write performance is roughly an order of magnitude slower than HashMapStateBackend.
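    Programmatically, choosing between the two 1.13 backends looks roughly like this (a minimal sketch; the checkpoint path is hypothetical, and the classes are HashMapStateBackend from flink-runtime and EmbeddedRocksDBStateBackend from the flink-statebackend-rocksdb module):

    // heap-based state: fast, but bounded by TaskManager memory
    env.setStateBackend(new HashMapStateBackend());
    // or disk-based state; "true" enables incremental checkpoints
    env.setStateBackend(new EmbeddedRocksDBStateBackend(true));
    // since 1.13, checkpoint storage is configured separately from the backend
    env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints");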

            That completes the construction of the StreamExecutionEnvironment; next comes the construction of the operators.

    2.2 Building the Operators

            Before reading the code, let's go over a few concepts, namely how an operator is translated.

            The computation logic we write inside an operator is a Function. The operator's job is to wrap that Function into a StreamOperator, and the StreamOperator is in turn wrapped into a Transformation, which is added to the environment's list of transformations. In short, the relationship is: Function => StreamOperator => Transformation. One more object is involved, the DataStream, which we can think of as the carrier of a Function and the bridge connecting adjacent operators. A concrete sketch of this wrapping follows below.
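    To make the three layers concrete, here is roughly what the wrapping looks like for a map-style operator (an illustrative sketch using internal 1.13 types, not user-facing API):

    MapFunction<String, Integer> fn = Integer::parseInt; // 1. the user's Function
    StreamMap<String, Integer> operator = new StreamMap<>(fn); // 2. wrapped into a StreamOperator
    // 3. doTransform(...) then wraps SimpleOperatorFactory.of(operator) into a
    //    OneInputTransformation and appends it to the environment's transformations list.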

            Now to the code. Once the StreamExecutionEnvironment has been built, we start building the source, as in:

    env.readTextFile(input)

    This readTextFile is itself an operator; let's use it as an example to see what happens inside an operator. Stepping into readTextFile brings us here:

    public DataStreamSource<String> readTextFile(String filePath, String charsetName) {
        Preconditions.checkArgument(
                !StringUtils.isNullOrWhitespaceOnly(filePath),
                "The file path must not be null or blank.");
        TextInputFormat format = new TextInputFormat(new Path(filePath));
        format.setFilesFilter(FilePathFilter.createDefaultFilter());
        TypeInformation<String> typeInfo = BasicTypeInfo.STRING_TYPE_INFO;
        format.setCharsetName(charsetName);
        // TODO Continue into readFile
        return readFile(format, filePath, FileProcessingMode.PROCESS_ONCE, -1, typeInfo);
    }

    Continuing into readFile:

    @PublicEvolving
    public <OUT> DataStreamSource<OUT> readFile(
            FileInputFormat<OUT> inputFormat,
            String filePath,
            FileProcessingMode watchType,
            long interval,
            TypeInformation<OUT> typeInformation) {
        Preconditions.checkNotNull(inputFormat, "InputFormat must not be null.");
        Preconditions.checkArgument(
                !StringUtils.isNullOrWhitespaceOnly(filePath),
                "The file path must not be null or blank.");
        inputFormat.setFilePath(filePath);
        // TODO Continue into createFileInput
        return createFileInput(
                inputFormat, typeInformation, "Custom File Source", watchType, interval);
    }

    And into createFileInput:

    private <OUT> DataStreamSource<OUT> createFileInput(
            FileInputFormat<OUT> inputFormat,
            TypeInformation<OUT> typeInfo,
            String sourceName,
            FileProcessingMode monitoringMode,
            long interval) {
        Preconditions.checkNotNull(inputFormat, "Unspecified file input format.");
        Preconditions.checkNotNull(typeInfo, "Unspecified output type information.");
        Preconditions.checkNotNull(sourceName, "Unspecified name for the source.");
        Preconditions.checkNotNull(monitoringMode, "Unspecified monitoring mode.");
        Preconditions.checkArgument(
                monitoringMode.equals(FileProcessingMode.PROCESS_ONCE)
                        || interval >= ContinuousFileMonitoringFunction.MIN_MONITORING_INTERVAL,
                "The path monitoring interval cannot be less than "
                        + ContinuousFileMonitoringFunction.MIN_MONITORING_INTERVAL
                        + " ms.");

        // TODO Create a Function
        ContinuousFileMonitoringFunction<OUT> monitoringFunction =
                new ContinuousFileMonitoringFunction<>(
                        inputFormat, monitoringMode, getParallelism(), interval);
        ContinuousFileReaderOperatorFactory factory =
                new ContinuousFileReaderOperatorFactory<>(inputFormat);
        final Boundedness boundedness =
                monitoringMode == FileProcessingMode.PROCESS_ONCE
                        ? Boundedness.BOUNDED
                        : Boundedness.CONTINUOUS_UNBOUNDED;

        // TODO Create the DataStreamSource
        // TODO Function => StreamOperator => Transformation
        SingleOutputStreamOperator<OUT> source =
                // TODO Wrap the Function into a DataStream
                addSource(monitoringFunction, sourceName, null, boundedness)
                        // TODO Then turn it into a Transformation and add that
                        //  Transformation to the transformations list
                        .transform("Split Reader: " + sourceName, typeInfo, factory);

        return new DataStreamSource<>(source);
    }

    Here we can start to see the operator translation steps:

    1. A Function object is created via new ContinuousFileMonitoringFunction(...)
    2. addSource(...) wraps the Function into a StreamOperator
    3. The DataStream carrying the StreamOperator is then handed to the transform method, which wraps it into a Transformation and adds it to the environment's transformations list

    Let's first look at how the Function is wrapped into a StreamOperator and handed to a DataStream, by stepping into addSource(...):

    private <OUT> DataStreamSource<OUT> addSource(
            final SourceFunction<OUT> function,
            final String sourceName,
            @Nullable final TypeInformation<OUT> typeInfo,
            final Boundedness boundedness) {
        checkNotNull(function);
        checkNotNull(sourceName);
        checkNotNull(boundedness);

        TypeInformation<OUT> resolvedTypeInfo =
                getTypeInfo(function, sourceName, SourceFunction.class, typeInfo);
        boolean isParallel = function instanceof ParallelSourceFunction;
        clean(function);

        /*
        TODO Note three points:
         1. StreamSource is itself a StreamOperator
         2. StreamSource wraps the Function
         3. The StreamSource is then wrapped, as a member variable, into a Transformation
        Hence the relationship: Function => StreamOperator => Transformation
        */
        final StreamSource<OUT, ?> sourceOperator = new StreamSource<>(function);
        return new DataStreamSource<>(
                this, resolvedTypeInfo, sourceOperator, isParallel, sourceName, boundedness);
    }

    As we can see, the Function is wrapped into a StreamSource, and from the inheritance chain a StreamSource is itself a StreamOperator. The freshly built StreamSource is then handed to a DataStreamSource, whose top-level ancestor is DataStream.
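    The inheritance chains mentioned here, abridged from the 1.13 sources:

    // StreamSource is an operator that wraps the SourceFunction:
    public class StreamSource<OUT, SRC extends SourceFunction<OUT>>
            extends AbstractUdfStreamOperator<OUT, SRC> { /* ... */ }
    // AbstractUdfStreamOperator extends AbstractStreamOperator<OUT>, which implements
    // StreamOperator<OUT>; and DataStreamSource<T> extends SingleOutputStreamOperator<T>,
    // which extends DataStream<T>.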

    With the DataStream built, let's look at the transform method:

    @PublicEvolving
    public <R> SingleOutputStreamOperator<R> transform(
            String operatorName,
            TypeInformation<R> outTypeInfo,
            OneInputStreamOperatorFactory<T, R> operatorFactory) {
        // TODO Continue into doTransform
        return doTransform(operatorName, outTypeInfo, operatorFactory);
    }

    Continuing into doTransform:

    protected <R> SingleOutputStreamOperator<R> doTransform(
            String operatorName,
            TypeInformation<R> outTypeInfo,
            StreamOperatorFactory<R> operatorFactory) {

        // read the output type of the input Transform to coax out errors about MissingTypeInfo
        transformation.getOutputType();

        // TODO Build the Transformation
        OneInputTransformation<T, R> resultTransform =
                new OneInputTransformation<>(
                        this.transformation,
                        operatorName,
                        operatorFactory,
                        outTypeInfo,
                        environment.getParallelism());

        @SuppressWarnings({"unchecked", "rawtypes"})
        SingleOutputStreamOperator<R> returnStream =
                new SingleOutputStreamOperator(environment, resultTransform);

        // TODO Add the Transformation to the environment's transformations list
        getExecutionEnvironment().addOperator(resultTransform);
        return returnStream;
    }

    In this method the Transformation object is built and, at the end, added to the environment's transformations list; it is also wrapped into the returned DataStream so it can be passed further downstream.

    With that, the readTextFile operator is complete. Let's look at the next operator:

    DataStream<Tuple2<String, Integer>> counts =
            // split up the lines in pairs (2-tuples) containing: (word,1)
            // TODO Step into flatMap
            text.flatMap(new Tokenizer())
                    // group by the tuple field "0" and sum up tuple field "1"
                    .keyBy(value -> value.f0)
                    .sum(1);
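    The Tokenizer passed to flatMap here is a plain FlatMapFunction, abridged from the WordCount example:

    public static final class Tokenizer
            implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
            // normalize and split the line into words
            String[] tokens = value.toLowerCase().split("\\W+");
            // emit a (word, 1) pair for every non-empty token
            for (String token : tokens) {
                if (token.length() > 0) {
                    out.collect(new Tuple2<>(token, 1));
                }
            }
        }
    }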

    First, the flatMap operator:

    // TODO Note that the parameter here is a Function; every StreamOperator contains a Function
    public <R> SingleOutputStreamOperator<R> flatMap(FlatMapFunction<T, R> flatMapper) {
        TypeInformation<R> outType =
                TypeExtractor.getFlatMapReturnTypes(
                        clean(flatMapper), getType(), Utils.getCallLocationName(), true);
        // TODO Continue into the overload
        return flatMap(flatMapper, outType);
    }

    A Function object is passed in here as the parameter; let's continue into the flatMap overload:

    public <R> SingleOutputStreamOperator<R> flatMap(
            FlatMapFunction<T, R> flatMapper, TypeInformation<R> outputType) {
        // TODO Wrap the Function into a StreamFlatMap operator and transform it
        return transform("Flat Map", outputType, new StreamFlatMap<>(clean(flatMapper)));
    }

    Into transform:

    @PublicEvolving
    public <R> SingleOutputStreamOperator<R> transform(
            String operatorName,
            TypeInformation<R> outTypeInfo,
            OneInputStreamOperator<T, R> operator) {
        // TODO Wrap the operator in a SimpleOperatorFactory and continue into doTransform
        return doTransform(operatorName, outTypeInfo, SimpleOperatorFactory.of(operator));
    }

    And into doTransform again:

    protected <R> SingleOutputStreamOperator<R> doTransform(
            String operatorName,
            TypeInformation<R> outTypeInfo,
            StreamOperatorFactory<R> operatorFactory) {

        // read the output type of the input Transform to coax out errors about MissingTypeInfo
        transformation.getOutputType();

        // TODO Build the Transformation
        OneInputTransformation<T, R> resultTransform =
                new OneInputTransformation<>(
                        this.transformation,
                        operatorName,
                        operatorFactory,
                        outTypeInfo,
                        environment.getParallelism());

        @SuppressWarnings({"unchecked", "rawtypes"})
        SingleOutputStreamOperator<R> returnStream =
                new SingleOutputStreamOperator(environment, resultTransform);

        // TODO Add the Transformation to the environment's transformations list
        getExecutionEnvironment().addOperator(resultTransform);
        return returnStream;
    }

    We are back in the same place. Essentially every operator follows this processing logic: build a StreamOperator from the Function, build a Transformation from it, add the Transformation to the transformations list, and wrap it into a DataStream that is returned and passed on to the downstream operator.

    2.3 How env.execute Is Implemented

            After all the operator computations and translations are done, every operator has been added, in the form of a Transformation, to the environment's transformations list. Now let's look at env.execute:

    public JobExecutionResult execute(String jobName) throws Exception {
        Preconditions.checkNotNull(jobName, "Streaming Job name should not be null.");
        // TODO Build the StreamGraph and execute it
        return execute(getStreamGraph(jobName));
    }

    A StreamGraph is built here and then executed. I will analyze the construction of the StreamGraph in detail in the next chapter; for now we stay with the job submission flow and look at this execute method:

    @Internal
    public JobExecutionResult execute(StreamGraph streamGraph) throws Exception {
        // Execute the StreamGraph asynchronously
        final JobClient jobClient = executeAsync(streamGraph);
        try {
            final JobExecutionResult jobExecutionResult;
            if (configuration.getBoolean(DeploymentOptions.ATTACHED)) {
                // TODO Block on get(), waiting for the submission result of the StreamGraph
                jobExecutionResult = jobClient.getJobExecutionResult().get();
            } else {
                jobExecutionResult = new DetachedJobExecutionResult(jobClient.getJobID());
            }
            jobListeners.forEach(
                    jobListener -> jobListener.onJobExecuted(jobExecutionResult, null));
            return jobExecutionResult;
        } catch (Throwable t) {
            // get() on the JobExecutionResult Future will throw an ExecutionException. This
            // behaviour was largely not there in Flink versions before the PipelineExecutor
            // refactoring so we should strip that exception.
            Throwable strippedException = ExceptionUtils.stripExecutionException(t);
            jobListeners.forEach(
                    jobListener -> {
                        jobListener.onJobExecuted(null, strippedException);
                    });
            ExceptionUtils.rethrowException(strippedException);
            // never reached, only make javac happy
            return null;
        }
    }

    In this method the StreamGraph is executed asynchronously, and the subsequent code blocks on get() waiting for the execution result. Let's look at the asynchronous path by stepping into executeAsync:

    @Internal
    public JobClient executeAsync(StreamGraph streamGraph) throws Exception {
        checkNotNull(streamGraph, "StreamGraph cannot be null.");
        checkNotNull(
                configuration.get(DeploymentOptions.TARGET),
                "No execution.target specified in your configuration file.");

        final PipelineExecutorFactory executorFactory =
                executorServiceLoader.getExecutorFactory(configuration);
        checkNotNull(
                executorFactory,
                "Cannot find compatible factory for specified execution.target (=%s)",
                configuration.get(DeploymentOptions.TARGET));

        /*
        TODO Submit asynchronously and obtain a future
        */
        CompletableFuture<JobClient> jobClientFuture =
                executorFactory
                        .getExecutor(configuration)
                        .execute(streamGraph, configuration, userClassloader);

        try {
            // TODO Block to obtain the submission result of the StreamGraph
            JobClient jobClient = jobClientFuture.get();
            jobListeners.forEach(jobListener -> jobListener.onJobSubmitted(jobClient, null));
            return jobClient;
        } catch (ExecutionException executionException) {
            final Throwable strippedException =
                    ExceptionUtils.stripExecutionException(executionException);
            jobListeners.forEach(
                    jobListener -> jobListener.onJobSubmitted(null, strippedException));
            throw new FlinkException(
                    String.format("Failed to execute job '%s'.", streamGraph.getJobName()),
                    strippedException);
        }
    }

    The StreamGraph is submitted here using asynchronous, CompletableFuture-based code. Continue into the execute method and choose the AbstractSessionClusterExecutor implementation:

    // TODO The pipeline parameter here is the StreamGraph
    @Override
    public CompletableFuture<JobClient> execute(
            @Nonnull final Pipeline pipeline,
            @Nonnull final Configuration configuration,
            @Nonnull final ClassLoader userCodeClassloader)
            throws Exception {
        // TODO Build the JobGraph from the StreamGraph
        final JobGraph jobGraph = PipelineExecutorUtils.getJobGraph(pipeline, configuration);

        /*
        TODO At this point the JobGraph is complete; next comes the submission of the JobGraph
        */
        try (final ClusterDescriptor<ClusterID> clusterDescriptor =
                clusterClientFactory.createClusterDescriptor(configuration)) {
            final ClusterID clusterID = clusterClientFactory.getClusterId(configuration);
            checkState(clusterID != null);

            /*
            TODO The ClusterClientProvider used to create the RestClusterClient:
             1. It initializes the RestClusterClient
             2. Initializing the RestClusterClient initializes its member variable RestClient
             3. Initializing the RestClient initializes a Netty client inside it
            TODO The client that submits the job: the Netty client inside the RestClient inside the RestClusterClient
            TODO The server that accepts the job: the Netty server inside the WebMonitorEndpoint started in the JobManager
            */
            final ClusterClientProvider<ClusterID> clusterClientProvider =
                    clusterDescriptor.retrieve(clusterID);
            ClusterClient<ClusterID> clusterClient = clusterClientProvider.getClusterClient();

            /*
            TODO Submit for execution:
             1. MiniClusterClient: local execution
             2. RestClusterClient: submit to the Flink REST server for processing
            */
            return clusterClient
                    // TODO Submit via the Netty client inside the RestClient
                    .submitJob(jobGraph)
                    .thenApplyAsync(
                            FunctionUtils.uncheckedFunction(
                                    jobId -> {
                                        ClientUtils.waitUntilJobInitializationFinished(
                                                () -> clusterClient.getJobStatus(jobId).get(),
                                                () -> clusterClient.requestJobResult(jobId).get(),
                                                userCodeClassloader);
                                        return jobId;
                                    }))
                    .thenApplyAsync(
                            jobID ->
                                    (JobClient)
                                            new ClusterClientJobClientAdapter<>(
                                                    clusterClientProvider,
                                                    jobID,
                                                    userCodeClassloader))
                    .whenCompleteAsync((ignored1, ignored2) -> clusterClient.close());
        }
    }

    The following work is done here:

    1. PipelineExecutorUtils.getJobGraph derives the JobGraph from the StreamGraph.

    2. A ClusterDescriptor object is built and used to create a ClusterClientProvider, which in turn creates the ClusterClient that actually performs the submission.

    Let's first look at how the ClusterClientProvider is built. Step into clusterDescriptor.retrieve and choose the StandaloneClusterDescriptor implementation:

    @Override
    public ClusterClientProvider<StandaloneClusterId> retrieve(
            StandaloneClusterId standaloneClusterId) throws ClusterRetrieveException {
        return () -> {
            try {
                // TODO Create the RestClusterClient
                return new RestClusterClient<>(config, standaloneClusterId);
            } catch (Exception e) {
                throw new RuntimeException("Couldn't retrieve standalone cluster", e);
            }
        };
    }

    This method initializes and returns a RestClusterClient; let's look at its constructor:

    private RestClusterClient(
            Configuration configuration,
            @Nullable RestClient restClient,
            T clusterId,
            WaitStrategy waitStrategy,
            ClientHighAvailabilityServices clientHAServices)
            throws Exception {
        this.configuration = checkNotNull(configuration);

        // TODO Parse the configuration
        this.restClusterClientConfiguration =
                RestClusterClientConfiguration.fromConfiguration(configuration);

        if (restClient != null) {
            this.restClient = restClient;
        } else {
            // TODO Build a RestClient
            // TODO Internally this just builds a Netty client
            this.restClient =
                    new RestClient(
                            restClusterClientConfiguration.getRestClientConfiguration(),
                            executorService);
        }

        this.waitStrategy = checkNotNull(waitStrategy);
        this.clusterId = checkNotNull(clusterId);
        this.clientHAServices = checkNotNull(clientHAServices);
        this.webMonitorRetrievalService = clientHAServices.getClusterRestEndpointLeaderRetriever();
        this.retryExecutorService =
                Executors.newSingleThreadScheduledExecutor(
                        new ExecutorThreadFactory("Flink-RestClusterClient-Retry"));
        // TODO Watch for changes of the WebMonitorEndpoint's address
        startLeaderRetrievers();
    }

    Three main things happen here:

    1. The configuration is parsed

    2. The Netty client is built

    3. A retriever starts watching for changes of the WebMonitorEndpoint's address

    With the Netty client in place, let's return to the submission of the JobGraph and revisit this code:

    /*
    TODO Submit for execution:
     1. MiniClusterClient: local execution
     2. RestClusterClient: submit to the Flink REST server for processing
    */
    return clusterClient
            // TODO Submit via the Netty client inside the RestClient
            .submitJob(jobGraph)
            .thenApplyAsync(
                    FunctionUtils.uncheckedFunction(
                            jobId -> {
                                ClientUtils.waitUntilJobInitializationFinished(
                                        () -> clusterClient.getJobStatus(jobId).get(),
                                        () -> clusterClient.requestJobResult(jobId).get(),
                                        userCodeClassloader);
                                return jobId;
                            }))
            .thenApplyAsync(
                    jobID ->
                            (JobClient)
                                    new ClusterClientJobClientAdapter<>(
                                            clusterClientProvider,
                                            jobID,
                                            userCodeClassloader))
            .whenCompleteAsync((ignored1, ignored2) -> clusterClient.close());

    Step into the submission method clusterClient.submitJob and choose the RestClusterClient implementation. The method is long, so we will analyze it in pieces, starting with this one:

    CompletableFuture<java.nio.file.Path> jobGraphFileFuture =
            CompletableFuture.supplyAsync(
                    () -> {
                        try {
                            final java.nio.file.Path jobGraphFile =
                                    Files.createTempFile("flink-jobgraph", ".bin");
                            try (ObjectOutputStream objectOut =
                                    new ObjectOutputStream(
                                            Files.newOutputStream(jobGraphFile))) {
                                objectOut.writeObject(jobGraph);
                            }
                            return jobGraphFile;
                        } catch (IOException e) {
                            throw new CompletionException(
                                    new FlinkException("Failed to serialize JobGraph.", e));
                        }
                    },
                    executorService);

    This code persists the JobGraph into a JobGraphFile, a file whose name starts with flink-jobgraph and ends with .bin. When we submit a JobGraph to the Flink cluster to be run, what we actually submit is this file; the cluster's WebMonitor (specifically the JobSubmitHandler) receives the request and handles it, and the first thing JobSubmitHandler does is deserialize the received file back into a JobGraph object.
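    A minimal sketch of that server-side first step (an assumption based on the description above, not the exact JobSubmitHandler code):

    // recover the JobGraph from the uploaded flink-jobgraph*.bin file
    static JobGraph readJobGraph(java.nio.file.Path uploadedJobGraphFile) throws Exception {
        try (ObjectInputStream in =
                new ObjectInputStream(Files.newInputStream(uploadedJobGraphFile))) {
            return (JobGraph) in.readObject();
        }
    }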

    On to the next piece of code:

    /*
    TODO Once persistence has finished, add the JobGraphFile to the list of files to upload
    */
    CompletableFuture<Tuple2<JobSubmitRequestBody, Collection<FileUpload>>> requestFuture =
            jobGraphFileFuture.thenApply(
                    jobGraphFile -> {
                        List<String> jarFileNames = new ArrayList<>(8);
                        List<JobSubmitRequestBody.DistributedCacheFile> artifactFileNames =
                                new ArrayList<>(8);
                        Collection<FileUpload> filesToUpload = new ArrayList<>(8);
                        // TODO Add the JobGraphFile to the list of files to upload
                        filesToUpload.add(
                                new FileUpload(
                                        jobGraphFile, RestConstants.CONTENT_TYPE_BINARY));
                        // TODO Upload the jars the job needs
                        for (Path jar : jobGraph.getUserJars()) {
                            jarFileNames.add(jar.getName());
                            filesToUpload.add(
                                    new FileUpload(
                                            Paths.get(jar.toUri()),
                                            RestConstants.CONTENT_TYPE_JAR));
                        }
                        ... ...
                        // TODO Build the request body for the submission, containing the resources
                        //  involved, mainly the persisted JobGraph file and the dependency jars
                        final JobSubmitRequestBody requestBody =
                                new JobSubmitRequestBody(
                                        jobGraphFile.getFileName().toString(),
                                        jarFileNames,
                                        artifactFileNames);
                        // TODO Return a Tuple2 with two elements: requestBody and filesToUpload
                        return Tuple2.of(
                                requestBody, Collections.unmodifiableCollection(filesToUpload));
                    });

    Here the JobGraphFile is added to the upload list along with the jars the job depends on, and finally the request body for the submission is built; it contains the required resources, mainly the persisted JobGraph file and the dependency jars.

    The next piece of code:

    // TODO Send the request
    final CompletableFuture<JobSubmitResponseBody> submissionFuture =
            requestFuture.thenCompose(
                    requestAndFileUploads ->
                            // TODO Submit
                            sendRetriableRequest(
                                    JobSubmitHeaders.getInstance(),
                                    EmptyMessageParameters.getInstance(),
                                    requestAndFileUploads.f0,
                                    requestAndFileUploads.f1,
                                    isConnectionProblemOrServiceUnavailable()));

    This is where the JobGraph is actually submitted; step into sendRetriableRequest:

    private <
                    M extends MessageHeaders<R, P, U>,
                    U extends MessageParameters,
                    R extends RequestBody,
                    P extends ResponseBody>
            CompletableFuture<P> sendRetriableRequest(
                    M messageHeaders,
                    U messageParameters,
                    R request,
                    Collection<FileUpload> filesToUpload,
                    Predicate<Throwable> retryPredicate) {
        // TODO Retry mechanism
        return retry(
                () ->
                        // TODO Get the address of the WebMonitorEndpoint in the JobManager master;
                        //  the client effectively submits the JobGraph to the WebMonitorEndpoint
                        getWebMonitorBaseUrl()
                                .thenCompose(
                                        webMonitorBaseUrl -> {
                                            try {
                                                /*
                                                TODO Submit the request to the WebMonitorEndpoint; eventually
                                                 the JobSubmitHandler processes it. Submitted via HTTP REST
                                                */
                                                return restClient.sendRequest(
                                                        webMonitorBaseUrl.getHost(),
                                                        webMonitorBaseUrl.getPort(),
                                                        messageHeaders,
                                                        messageParameters,
                                                        request,
                                                        filesToUpload);
                                            } catch (IOException e) {
                                                throw new CompletionException(e);
                                            }
                                        }),
                retryPredicate);
    }

    This method first obtains the address of the WebMonitorEndpoint and then submits the job over HTTP REST. Let's follow the submission flow into restClient.sendRequest:

    @Override
    public <
                    M extends MessageHeaders<R, P, U>,
                    U extends MessageParameters,
                    R extends RequestBody,
                    P extends ResponseBody>
            CompletableFuture<P> sendRequest(
                    final String targetAddress,
                    final int targetPort,
                    final M messageHeaders,
                    final U messageParameters,
                    final R request,
                    final Collection<FileUpload> files)
                    throws IOException {
        if (failHttpRequest.test(messageHeaders, messageParameters, request)) {
            return FutureUtils.completedExceptionally(new IOException("expected"));
        } else {
            // TODO Continue the submission
            return super.sendRequest(
                    targetAddress,
                    targetPort,
                    messageHeaders,
                    messageParameters,
                    request,
                    files);
        }
    }

    Then into super.sendRequest:

    public <
                    M extends MessageHeaders<R, P, U>,
                    U extends MessageParameters,
                    R extends RequestBody,
                    P extends ResponseBody>
            CompletableFuture<P> sendRequest(
                    String targetAddress,
                    int targetPort,
                    M messageHeaders,
                    U messageParameters,
                    R request,
                    Collection<FileUpload> fileUploads,
                    RestAPIVersion apiVersion)
                    throws IOException {
        Preconditions.checkNotNull(targetAddress);
        Preconditions.checkArgument(
                NetUtils.isValidHostPort(targetPort),
                "The target port " + targetPort + " is not in the range [0, 65535].");
        Preconditions.checkNotNull(messageHeaders);
        ... ...
        ... ...

        /*
        TODO Work out the URL, which decides which handler in the WebMonitorEndpoint
         will process the request
        */
        String versionedHandlerURL =
                "/" + apiVersion.getURLVersionPrefix() + messageHeaders.getTargetRestEndpointURL();
        String targetUrl = MessageParameters.resolveUrl(versionedHandlerURL, messageParameters);

        LOG.debug(
                "Sending request of class {} to {}:{}{}",
                request.getClass(),
                targetAddress,
                targetPort,
                targetUrl);

        // serialize payload
        StringWriter sw = new StringWriter();
        objectMapper.writeValue(sw, request);
        ByteBuf payload =
                Unpooled.wrappedBuffer(sw.toString().getBytes(ConfigConstants.DEFAULT_CHARSET));

        // TODO Build an HTTP Request object
        Request httpRequest =
                createRequest(
                        targetAddress + ':' + targetPort,
                        targetUrl,
                        messageHeaders.getHttpMethod().getNettyHttpMethod(),
                        payload,
                        fileUploads);

        final JavaType responseType;
        final Collection<Class<?>> typeParameters = messageHeaders.getResponseTypeParameters();
        if (typeParameters.isEmpty()) {
            responseType = objectMapper.constructType(messageHeaders.getResponseClass());
        } else {
            responseType =
                    objectMapper
                            .getTypeFactory()
                            .constructParametricType(
                                    messageHeaders.getResponseClass(),
                                    typeParameters.toArray(new Class[typeParameters.size()]));
        }
        // TODO Submit the request
        return submitRequest(targetAddress, targetPort, httpRequest, responseType);
    }

    This method does the following:

    1. Works out the URL, which determines which handler in the WebMonitorEndpoint will process the request
    2. Builds an HTTP Request object
    3. Submits the request

    Let's follow the submission of the request into submitRequest:

    private <P extends ResponseBody> CompletableFuture<P> submitRequest(
            String targetAddress, int targetPort, Request httpRequest, JavaType responseType) {
        /*
        TODO Send the request to the Netty server through the Netty client
        */
        final ChannelFuture connectFuture = bootstrap.connect(targetAddress, targetPort);
        final CompletableFuture<Channel> channelFuture = new CompletableFuture<>();
        connectFuture.addListener(
                (ChannelFuture future) -> {
                    if (future.isSuccess()) {
                        channelFuture.complete(future.channel());
                    } else {
                        channelFuture.completeExceptionally(future.cause());
                    }
                });

        return channelFuture
                .thenComposeAsync(
                        channel -> {
                            ClientHandler handler = channel.pipeline().get(ClientHandler.class);
                            CompletableFuture<JsonResponse> future;
                            boolean success = false;
                            try {
                                if (handler == null) {
                                    throw new IOException(
                                            "Netty pipeline was not properly initialized.");
                                } else {
                                    // TODO Write the request packet to the server
                                    httpRequest.writeTo(channel);
                                    future = handler.getJsonFuture();
                                    success = true;
                                }
                            } catch (IOException e) {
                                future =
                                        FutureUtils.completedExceptionally(
                                                new ConnectionException(
                                                        "Could not write request.", e));
                            } finally {
                                if (!success) {
                                    channel.close();
                                }
                            }
                            return future;
                        },
                        executor)
                .thenComposeAsync(
                        (JsonResponse rawResponse) -> parseResponse(rawResponse, responseType),
                        executor);
    }

    As we can see, bootstrap.connect is used here to connect to the Netty server, and once the connection succeeds, httpRequest.writeTo(channel) sends the data.

            The bootstrap here is the Netty client's bootstrap. When the master node starts, it starts the WebMonitorEndpoint component, which in turn starts a Netty server; when the client submits a job, it does so through the RestClient, and initializing the RestClient initializes the Netty client. Calling submitRequest(...) performs the submission: the Netty client connects to the Netty server and sends the request, which is to say it writes the Request object's data to the server.
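    For reference, the wiring behind bootstrap.connect(...) is standard Netty client setup, roughly like the generic sketch below (Flink actually uses its shaded Netty, and the real RestClient also installs an SSL handler if configured, an HTTP codec/aggregator, and its ClientHandler in the pipeline; the address is hypothetical):

    Bootstrap bootstrap = new Bootstrap()
            .group(new NioEventLoopGroup(1))
            .channel(NioSocketChannel.class)
            .handler(new ChannelInitializer<SocketChannel>() {
                @Override
                protected void initChannel(SocketChannel ch) {
                    ch.pipeline().addLast(new HttpClientCodec()); // plus Flink's own handlers
                }
            });
    ChannelFuture connectFuture = bootstrap.connect("jobmanager-host", 8081);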

            At this point our job has been handed to the WebMonitorEndpoint on the master node. This chapter has not covered the construction of the StreamGraph and the JobGraph in detail; I plan to analyze the construction of both graphs in the following chapters.

    Summary

            During the initialization of the StreamExecutionEnvironment, two main things happen: the StateBackend is configured and the checkpoint settings are parsed.

            In a Flink application, every operation is ultimately a StreamOperator, divided into source, general stream, and sink operators, and operators that can be optimized together are chained into an OperatorChain.

            The operator translation pipeline is: Function => StreamOperator => Transformation => OperatorChain (which, after parallelization, is executed as StreamTasks).

            In the env.execute phase, the StreamGraph is built from the transformations list we accumulated, then converted into a JobGraph, which is persisted to a file. Finally, the JobGraphFile, the dependency jars, and some other configuration are assembled into a RequestBody and sent, through the Netty client built inside the RestClient, to the Netty server inside the JobManager's WebMonitorEndpoint, where the URL is resolved and the request is dispatched to the matching handler.

  • Original article: https://blog.csdn.net/EdwardWong_/article/details/126745311