Sorry in advance — I searched for a while but couldn't find a board for Flink, so I'm posting here. If this is the wrong place, I'd appreciate a moderator moving it. Thanks.
--------------------------------------------------------------------------------------------------------------
I currently have an HDFS + YARN cluster and a standalone Flink cluster set up, on the following hosts:
Flink standalone cluster:
f-node-01
f-node-02
Hadoop cluster:
h-name-01
h-name-02
h-data-01
h-data-02
The standalone Flink cluster works without any problems; that has been verified in use. Now I want to submit Flink jobs to the YARN cluster, i.e. run them in Flink-on-YARN mode. I chose the per-job mode, submitting from the f-node-01 node to the YARN cluster on h-name-01. I have already copied yarn-site.xml to f-node-01 and set the YARN_CONF_DIR environment variable. But when I run [ ./flink run -m yarn-cluster -yjm 2048m -ytm 2048m /tmp/flink-demo.jar ], it fails with the following error:
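For reference, the client-side setup steps described above can be sketched as follows. The config directory /opt/hadoop-conf is an assumption for illustration; the copy and submit commands are shown as comments because they require the live cluster:

```shell
# Sketch of the submission setup described above (hostnames are from the
# post; /opt/hadoop-conf is an assumed path, not from the original).

# Copy the YARN client config from the ResourceManager host, e.g.:
# scp root@h-name-01:/etc/hadoop/conf/yarn-site.xml /opt/hadoop-conf/

# Point the Flink client at the Hadoop/YARN config before submitting:
export YARN_CONF_DIR=/opt/hadoop-conf

# Per-job submission, 2 GB each for JobManager and TaskManager:
# ./flink run -m yarn-cluster -yjm 2048m -ytm 2048m /tmp/flink-demo.jar
```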
--------------------------------------------------------------------------------------------------------------
Caused by: org.apache.flink.yarn.YarnClusterDescriptor$YarnDeploymentException: The YARN application unexpectedly switched to state FAILED during deployment.
Diagnostics from YARN: Application application_1603786544958_0001 failed 2 times in previous 10000 milliseconds due to AM Container for appattempt_1603786544958_0001_000002 exited with exitCode: -1000
Failing this attempt.Diagnostics: File file:/root/.flink/application_1603786544958_0001/lib/flink-table_2.11-1.11.2.jar does not exist
java.io.FileNotFoundException: File file:/root/.flink/application_1603786544958_0001/lib/flink-table_2.11-1.11.2.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:635)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:861)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:625)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:253)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:63)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:361)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:358)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
For more detailed output, check the application tracking page: http://h-name-01:8088/cluster/app/application_1603786544958_0001 Then click on links to logs of each attempt.
--------------------------------------------------------------------------------------------------------------
What is causing this problem?
Reply from a uj5u.com user:
From the error you can see the cause: the YARN container cannot find the Flink JARs. But how do I get Flink to upload its JARs automatically when deploying to YARN?
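A likely explanation, given that only yarn-site.xml was copied to f-node-01: without a core-site.xml on the client, Hadoop's `fs.defaultFS` falls back to `file:///`, so Flink stages its JARs under `file:/root/.flink/...` on the submitting host — a path the NodeManagers on other hosts cannot read, which matches the `FileNotFoundException` above. Pointing `fs.defaultFS` at HDFS (by copying core-site.xml, and typically hdfs-site.xml, into the client's config directory) should make Flink upload the JARs to HDFS instead. A minimal sketch; the HDFS port 9000 and the directory path are assumptions, not from the post:

```shell
# Sketch: what the client's core-site.xml should declare so Flink stages
# its jars on HDFS rather than the local filesystem. Port 9000 is an
# assumption; use the cluster's actual NameNode address.
mkdir -p /tmp/demo-hadoop-conf
cat > /tmp/demo-hadoop-conf/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://h-name-01:9000</value>
  </property>
</configuration>
EOF
# With no core-site.xml, fs.defaultFS defaults to file:///, which is why
# the staged jars ended up under file:/root/.flink/... in the error above.
grep -o 'hdfs://[^<]*' /tmp/demo-hadoop-conf/core-site.xml
```

On the real client, this file would go into the directory referenced by HADOOP_CONF_DIR (or YARN_CONF_DIR) alongside yarn-site.xml; after that, `./flink run -m yarn-cluster ...` should upload the Flink JARs to an `hdfs://.../.flink/application_.../` staging directory automatically.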