How should this kind of problem be solved? Running the command `bin/spark-shell --master yarn-client` fails with the following error:

```
16/11/03 04:12:37 INFO yarn.Client:
client token: N/A
diagnostics: Application application_1478169271979_0001 failed 2 times due to AM Container for appattempt_1478169271979_0001_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://myc-1:8088/cluster/app/ ... 1Then, click on links to logs of each attempt.
Diagnostics: Container [pid=8536,containerID=container_1478169271979_0001_02_000001] is running beyond virtual memory limits. Current usage: 15.9 MB of 1 GB physical memory used; 2.1 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1478169271979_0001_02_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 8540 8536 8536 8536 (java) 3 2 2151677952 3769 /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1478169271979_0001/container_1478169271979_0001_02_000001/tmp -Dspark.yarn.app.container.log.dir=/data/software/hadoop-2.7.3/logs/userlogs/application_1478169271979_0001/container_1478169271979_0001_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 192.168.141.128:36263 --executor-memory 1024m --executor-cores 1 --properties-file /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1478169271979_0001/container_1478169271979_0001_02_000001/__spark_conf/__spark_conf.properties
|- 8536 8534 8536 8536 (bash) 0 1 108650496 304 /bin/bash -c /usr/lib/jvm/jre-1.8.0-openjdk.x86_64/bin/java -server -Xmx512m -Djava.io.tmpdir=/tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1478169271979_0001/container_1478169271979_0001_02_000001/tmp -Dspark.yarn.app.container.log.dir=/data/software/hadoop-2.7.3/logs/userlogs/application_1478169271979_0001/container_1478169271979_0001_02_000001 org.apache.spark.deploy.yarn.ExecutorLauncher --arg '192.168.141.128:36263' --executor-memory 1024m --executor-cores 1 --properties-file /tmp/hadoop-root/nm-local-dir/usercache/root/appcache/application_1478169271979_0001/container_1478169271979_0001_02_000001/__spark_conf/__spark_conf.properties 1> /data/software/hadoop-2.7.3/logs/userlogs/application_1478169271979_0001/container_1478169271979_0001_02_000001/stdout 2> /data/software/hadoop-2.7.3/logs/userlogs/application_1478169271979_0001/container_1478169271979_0001_02_000001/stderr

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1478171447411
final status: FAILED
tracking URL: http://myc-1:8088/cluster/app/ ... _0001
user: root
16/11/03 04:12:37 INFO yarn.Client: Deleting staging directory .sparkStaging/application_1478169271979_0001
16/11/03 04:12:38 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:1017)
at $line3.$read$$iwC$$iwC$$iwC.<init>(<console>:15)
at $line3.$read$$iwC.<init>(<console>:24)
at $line3.$read.<init>(<console>:26)
at $line3.$read$.<init>(<console>:30)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.<init>(<console>:7)
at $line3.$eval$.<clinit>(<console>)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:125)
at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:124)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:974)
at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:159)
at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:108)
at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:991)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
at org.apache.spark.repl.Main$.main(Main.scala:31)
```

regan (run! run! run! happy runner! I'm the running Xiaomi~) answered on 2017-02-19:

Diagnostics: Container [pid=8536,containerID=container_1478169271979_0001_02_000001] is running beyond virtual memory limits.
Are you running this on a virtual machine? The container exceeded its virtual memory limit. Either increase the virtual machine's memory, or use --driver-memory to set the driver process's heap size and lower it somewhat.
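
For reference, a minimal sketch of that suggestion; the 512m value below is only a placeholder, not a figure taken from the original question:

```
# Hypothetical invocation: launch the shell on YARN with an explicit, smaller driver heap.
# Tune the value to what the host and the YARN container limits actually allow.
bin/spark-shell --master yarn-client --driver-memory 512m
```

The same --driver-memory flag also accepts larger values (for example 2g) if the choice is instead to give the process more memory.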
