1.

 ERROR org.apache.spark.storage.DiskBlockObjectWriter: Uncaught exception while reverting partial writes to file /home/work/hdd9/yarn/ttttt-hadoop/nodemanager/usercache/s_sales/appcache/application_1597370851926_759263/blockmgr-7aa07b85-2ee1-4b1b-9eb1-62e09a2284b6/0b/temp_shuffle_50a8ebc8-167e-4ede-b3e4-b41cade68a21

Solution:

  • If the log also contains 2020-08-19,22:12:32,336 INFO org.apache.spark.executor.Executor: Executor is trying to kill task 16.0 in stage 21.0 (TID 2663), reason: another attempt succeeded, the task was killed by speculative execution. When individual tasks finish very quickly, many speculative attempts tend to get launched. If you don't need speculation, turning it off makes this error go away, at the cost that a slow node may then make the odd task run slowly.
  • On the surface, the shuffle had nowhere left to write. If the stack that follows points at local disk space, clearing the disk is enough.
  • The above can also happen because an executor was allocated too little memory; in that case, reducing executor cores and increasing executor memory should fix it (see the config sketch after this list).
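
A minimal sketch of the settings involved (assuming a SparkSession entry point; the values are illustrative and should be tuned for your cluster):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("shuffle-write-fix")          // hypothetical app name
  .config("spark.speculation", "false")  // turn off speculative execution
  .config("spark.executor.cores", "2")   // fewer cores per executor
  .config("spark.executor.memory", "8g") // more memory per executor
  .getOrCreate()

The same keys can be passed as --conf options to spark-submit instead.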

2. Error as follows:

2020-08-25,14:54:29,675 ERROR org.apache.spark.sql.execution.datasources.FileFormatWriter: Aborting job null.
org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange RoundRobinPartitioning(1)
+- *(1) Filter (((isnotnull(event#64) && isnotnull(area#66)) && (event#64 = VISIBLE)) && (area#66 = ad_to_shop))
   +- Scan ExistingRDD[uid#51L,session#52,time#53L,appversion#54,imei#55,customosversion#56,os#57,osversion#58,phone#59,uuid#60,page#61,v#62,id#63,event#64,spend#65L,area#66,iid#67,method#68,ref#69,pid#70,gid#71,listtype#72,posx#73,posy#74,... 27 more fields]

	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1$$anonfun$apply$2.apply(SparkPlan.scala:160)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1$$anonfun$apply$2.apply(SparkPlan.scala:160)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:160)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:156)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:184)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:181)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:156)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:180)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:154)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1$$anonfun$apply$2.apply(SparkPlan.scala:160)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1$$anonfun$apply$2.apply(SparkPlan.scala:160)
	at scala.Option.getOrElse(Option.scala:121)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:160)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:156)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:184)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:181)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:156)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:112)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:112)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
	at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:656)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:78)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:656)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:225)
	at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:644)
	at com.xiaomi.youpin.brandsquare_clicknum_visiblenum_ctr_ab$.main(brandsquare_clicknum_visiblenum_ctr_ab.scala:31)
	at com.xiaomi.youpin.brandsquare_clicknum_visiblenum_ctr_ab.main(brandsquare_clicknum_visiblenum_ctr_ab.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$4.run(ApplicationMaster.scala:724)
Caused by: java.lang.IllegalArgumentException: Can not create a Path from an empty string
	at org.apache.hadoop.fs.Path.checkPathArg(Path.java:127)
	at org.apache.hadoop.fs.Path.<init>(Path.java:135)
	at org.apache.hadoop.util.StringUtils.stringToPath(StringUtils.java:244)
	at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:409)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$30.apply(SparkContext.scala:1044)
	at org.apache.spark.SparkContext$$anonfun$hadoopFile$1$$anonfun$30.apply(SparkContext.scala:1044)
	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$5$$anonfun$apply$3.apply(HadoopRDD.scala:177)
	at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$5$$anonfun$apply$3.apply(HadoopRDD.scala:177)

Solution: the root cause is Caused by: java.lang.IllegalArgumentException: Can not create a Path from an empty string.

Check the input path: one of the paths handed to the read is an empty string.
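
A small guard like the following fails fast with a clear message instead of the deep stack above (a sketch assuming an existing SparkSession named spark; INPUT_PATH is a hypothetical source of the path):

val inputPath = sys.env.getOrElse("INPUT_PATH", "")
require(inputPath.nonEmpty, "input path must not be empty") // fail before the job is planned

val df = spark.read.parquet(inputPath)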

3.

org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException:
Invalid resource request, requested resource type=[vcores] < 0 or greater than maximum allowed allocation. Requested resource=<memory:4505, vCores:5>, maximum allowed allocation=<memory:16384, vCores:4>, please note that maximum allowed allocation is calculated by scheduler based on maximum resource of registered NodeManagers, which might be less than configured maximum allocation=<memory:16384, vCores:4>

Solution: insufficient resources. The request asks for 5 vCores per container, but the scheduler's maximum allocation is 4 vCores, so the request can never be satisfied; shrink the request to fit.
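
A sketch of a request that fits the limits quoted in the message (the overhead arithmetic is likely where the odd memory:4505 figure comes from: a 4g heap plus the default max(384 MB, 10%) overhead of 409 MB):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .config("spark.executor.cores", "4")   // must be <= the 4-vCore scheduler maximum
  .config("spark.executor.memory", "4g") // 4096 MB heap + 409 MB overhead ≈ memory:4505
  .getOrCreate()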

4.

2020-08-27,14:10:53,446 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 332.0 in stage 0.0 (TID 17, zjy-hadoop-prc-st4641.bj, executor 62): org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://zc-hadoop/user/h_data_platform/platform/dw/dwm_ypord_ord_item_df/date=20200825/part-00332-2a163fba-8f44-4897-ba2d-09818ca8e53e-c000.snappy.parquet

Solution: the schema of the file being read does not match the schema the reader was configured with (e.g. a column's type differs between partition files).
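
A sketch for surfacing the mismatch (the path is taken from the log above and is illustrative; assumes a SparkSession named spark):

val path = "hdfs://zc-hadoop/user/h_data_platform/platform/dw/dwm_ypord_ord_item_df/date=20200825"
println(spark.read.parquet(path).schema.treeString) // schema Spark actually infers
// If partition files were written with different types, asking Spark to
// merge schemas makes the conflict explicit at planning time:
val merged = spark.read.option("mergeSchema", "true").parquet(path)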

5.

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1433 in stage 3.0 failed 4 times, most recent failure: Lost task 1433.3 in stage 3.0 (TID 2112, mb2-hadoop-prc-st1114.awsind, executor 377): org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-1408582172-10.75.5.20-1519387084415:blk_1153815625_80133107 file=/user/s_lcs/miuiads/miuiads_upload_log/year=2020/month=08/day=26/awsind0-fusion-talos_miuiads_upload_log_mb2-hadoop-prc-transfer44.awsind_56_20200826-184607at org.apache.hadoop.hdfs.DFSInputStream.refetchLocations(DFSInputStream.java:1001)

Solution: the HDFS file is corrupt (a block cannot be obtained). Running hdfs fsck on the path confirms which files and blocks are affected; regenerate or remove the corrupt files.

6.

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Total size of serialized results of 347580 tasks (10.0 GB) is bigger than spark.driver.maxResultSize (10.0 GB)

Solution: increase spark.driver.maxResultSize, e.g. conf.set("spark.driver.maxResultSize", "20g")
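
A sketch of the setting. Note that 347,580 tasks returning results to the driver suggests a collect-style action; writing results out with df.write avoids funneling data through the driver at all and is usually the better fix.

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.driver.maxResultSize", "20g") // "0" means unlimited, at your own risk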

7. No obvious error, but the job fails anyway

Log Type: spark.log

Log Upload Time: Mon Aug 31 18:01:31 +0800 2020

Log Length: 19563552

Showing 4096 bytes of 19563552 total.

 392984.0 in stage 251.0 (TID 168896) in 35504 ms on tmp-hadoop-prc-st3918.bj (executor 490) (55928/640400)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 594156.0 in stage 251.0 (TID 196379, tmp-hadoop-prc-st2620.bj, executor 585, partition 594156, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 337389.0 in stage 251.0 (TID 196380, tmp-hadoop-prc-st2844.bj, executor 601, partition 337389, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 571345.0 in stage 251.0 (TID 168858) in 35511 ms on tmp-hadoop-prc-st3116.bj (executor 515) (55929/640400)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 123737.0 in stage 251.0 (TID 196381, tmp-hadoop-prc-st3895.bj, executor 511, partition 123737, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 88175.0 in stage 251.0 (TID 168889) in 35506 ms on tmp-hadoop-prc-st1994.bj (executor 522) (55930/640400)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 39519.0 in stage 251.0 (TID 196382, tmp-hadoop-prc-st3695.bj, executor 600, partition 39519, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 239005.0 in stage 251.0 (TID 196383, tmp-hadoop-prc-st116.bj, executor 514, partition 239005, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 136789.0 in stage 251.0 (TID 168890) in 35506 ms on tmp-hadoop-prc-st3410.bj (executor 523) (55931/640400)
2020-08-31,18:01:12,349 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 171920.0 in stage 251.0 (TID 196384, tmp-hadoop-prc-st5013.bj, executor 548, partition 171920, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 536969.0 in stage 251.0 (TID 168899) in 35504 ms on tmp-hadoop-prc-st1244.bj (executor 529) (55932/640400)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 39520.0 in stage 251.0 (TID 196385, tmp-hadoop-prc-st3695.bj, executor 600, partition 39520, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 95495.0 in stage 251.0 (TID 196386, tmp-hadoop-prc-st66.bj, executor 579, partition 95495, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 586618.0 in stage 251.0 (TID 168893) in 35506 ms on tmp-hadoop-prc-st146.bj (executor 508) (55933/640400)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 572696.0 in stage 251.0 (TID 196387, tmp-hadoop-prc-st4478.bj, executor 550, partition 572696, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 88176.0 in stage 251.0 (TID 168910) in 35504 ms on tmp-hadoop-prc-st1994.bj (executor 522) (55934/640400)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 188579.0 in stage 251.0 (TID 196388, tmp-hadoop-prc-st3541.bj, executor 532, partition 188579, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 263777.0 in stage 251.0 (TID 196389, tmp-hadoop-prc-st1022.bj, executor 591, partition 263777, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,350 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 586617.0 in stage 251.0 (TID 168888) in 35507 ms on tmp-hadoop-prc-st146.bj (executor 508) (55935/640400)
2020-08-31,18:01:12,351 INFO org.apache.spark.scheduler.TaskSetManager: Starting task 123674.0 in stage 251.0 (TID 196390, tmp-hadoop-prc-st3821.bj, executor 538, partition 123674, NODE_LOCAL, 5616 bytes)
2020-08-31,18:01:12,351 INFO org.apache.spark.scheduler.TaskSetManager: Finished task 7414.0 in stage 251.0 (TID 168950) in 35499 ms on tmp-hadoop-prc-st237.bj (executor 520) (55936/640400)

 

Solution: from the log you can see the partition count is far too high (640,400 tasks in a single stage); reducing the partition and core parameters fixed it. (Note: I never fully got to the bottom of this one; it still comes down to resource usage. See the sketch below.)
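
A minimal sketch of collapsing an oversized partition count (assuming a DataFrame df; the target of 2000 is illustrative, sized to a small multiple of the cluster's total cores):

val compacted = df.coalesce(2000) // merges partitions without a shuffle
// or, when an even redistribution is worth the shuffle cost:
val rebalanced = df.repartition(2000)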

8. java.lang.Exception: org.apache.spark.sql.AnalysisException: java.lang.ExceptionInInitializerError: null;

Solution: the ops team later dug up a more detailed backend log containing: Unrecognized Hadoop major version number: 3.1.0-mdh3.1.0.0-SNAPSHOT. The problem: Hadoop was being canary-upgraded to a new version, and the old Hive could not recognize that Hadoop version. A retry worked; the longer-term fix is to upgrade Hive.

9.

Error: ValueError: (2006, "MySQL server has gone away (BrokenPipeError(32, 'Broken pipe'))")
Cause: a single insert exceeded MySQL's max_allowed_packet (default 4 MB), so the data could not be written to MySQL.

Solution: 1. Have the DBA raise max_allowed_packet; reference: https://www.freesion.com/article/9288880619/

2. Insert the data in batches, reducing the amount sent per insert (see the sketch below).
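
A minimal batching sketch over JDBC (the connection URL, table, and the rows collection of (String, String) pairs are hypothetical; keep each batch comfortably under max_allowed_packet):

import java.sql.DriverManager

val conn = DriverManager.getConnection("jdbc:mysql://host:3306/db", "user", "pass")
val stmt = conn.prepareStatement("INSERT INTO t (k, v) VALUES (?, ?)")
rows.grouped(500).foreach { batch => // 500 rows per round trip
  batch.foreach { case (k, v) =>
    stmt.setString(1, k); stmt.setString(2, v); stmt.addBatch()
  }
  stmt.executeBatch() // one packet per batch
}
conn.close()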

10.

2020-09-03,19:50:16,844 INFO org.apache.spark.executor.Executor: Executor is trying to kill task 151.0 in stage 98.0 (TID 43579), reason: another attempt succeeded
2020-09-03,19:50:16,844 INFO org.apache.spark.executor.Executor: Executor is trying to kill task 313.0 in stage 98.0 (TID 43741), reason: another attempt succeeded
2020-09-03,19:50:16,861 INFO org.apache.spark.util.collection.ExternalAppendOnlyMap: Thread 85 heap info: Number of full GC events before check: 432. Number of full GC events after check: 433, new gen utilization: 1%, new gen used: 35.6 MB, new gen capacity: 3.3 GB, old gen utilization: 80%, old gen size: 5.4 GB, old gen capacity: 6.7 GB
2020-09-03,19:50:16,861 INFO org.apache.spark.util.collection.ExternalAppendOnlyMap: 
Thread 85 spill conditions:
 ================================================================================
 shouldSpill: false
 ================================================================================
 shouldSpillForExceedOldGenDeltaThreshold: false = oldGenUsed(5.4 GB) - baseOldGenUsed(5.4 GB) >= forceSpillOldGenDelta(800.0 MB)
 ================================================================================
 shouldSpillForPromotionSafe: false = hasFullGCSinceLastSpill(true) && oldGenCapacity(6.7 GB) - oldGenUsed(5.4 GB) < promotedAvg(3.7 MB) * promotionReserveFraction(1.5)
 ================================================================================
 shouldSpillForIneffectiveGC: false = currentTime(1599133816861) > fastSpillingScheduleTime(9223372036854775807) || fastSpillingCount(0) >= fastSpillThreshold(3)
           
2020-09-03,19:50:16,861 INFO org.apache.spark.util.collection.ExternalAppendOnlyMap: 
Shrink spilling threshold adaptively:
currentForceSpillOldGenDelta: 800.0 MB -> -5797267496.0 B
             
2020-09-03,19:50:18,198 INFO org.apache.spark.executor.Executor: Executor killed task 232.0 in stage 98.0 (TID 43660), reason: another attempt succeeded
2020-09-03,19:50:18,199 INFO org.apache.spark.util.collection.ExternalAppendOnlyMap: Initial heap memory state: new gen utilization: 0%, new gen size: 0.0 B, new gen capacity: 3.3 GB, old gen utilization: 80%, old gen size: 5.4 GB, old gen capacity: 6.7 GB
2020-09-03,19:50:18,200 INFO org.apache.spark.executor.Executor: Executor killed task 313.0 in stage 98.0 (TID 43741), reason: another attempt succeeded
2020-09-03,19:50:18,201 INFO org.apache.spark.executor.Executor: Executor killed task 151.0 in stage 98.0 (TID 43579), reason: another attempt succeeded
2020-09-03,19:50:18,201 INFO org.apache.spark.executor.CoarseGrainedExecutorBackend: Got assigned task 44575
2020-09-03,19:50:18,201 INFO org.apache.spark.executor.Executor: Running task 0.0 in stage 101.0 (TID 44575)
2020-09-03,19:50:18,202 ERROR org.apache.spark.network.client.TransportResponseHandler: Still have 3 requests outstanding when connection from tmp-hadoop-prc-st3689.bj/10.155.1.8:20809 is closed
2020-09-03,19:50:18,202 INFO org.apache.spark.executor.CoarseGrainedExecutorBackend: Got assigned task 44576
2020-09-03,19:50:18,202 INFO org.apache.spark.executor.Executor: Running task 2.0 in stage 101.0 (TID 44576)
2020-09-03,19:50:18,202 INFO org.apache.spark.executor.CoarseGrainedExecutorBackend: Got assigned task 44577
2020-09-03,19:50:18,202 INFO org.apache.spark.network.shuffle.RetryingBlockFetcher: Retrying fetch (1/3) for 3 outstanding blocks after 5000 ms
2020-09-03,19:50:18,202 INFO org.apache.spark.broadcast.TorrentBroadcast: Started reading broadcast variable 138 with 1 pieces (estimated total size 4.0 MB)
2020-09-03,19:50:18,203 INFO org.apache.spark.executor.Executor: Running task 3.0 in stage 101.0 (TID 44577)
2020-09-03,19:50:21,769 INFO org.apache.spark.util.collection.ExternalAppendOnlyMap: Initial heap memory state: new gen utilization: 0%, new gen size: 0.0 B, new gen capacity: 3.3 GB, old gen utilization: 80%, old gen size: 5.4 GB, old gen capacity: 6.7 GB
2020-09-03,19:50:21,773 WARN org.apache.spark.network.server.TransportChannelHandler: Exception in connection from tmp-hadoop-prc-st1934.bj/10.152.112.26:20809
java.io.IOException: Connection reset by peer
	at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
	at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
	at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
	at sun.nio.ch.IOUtil.read(IOUtil.java:192)
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)

Solution: frequent GC plus dropped connections. Watching the logs printed around each code block also showed that interactions with HDFS had become very slow; in our case the cause was a platform upgrade combined with an overloaded HDFS namespace.

11. Diagnostics: Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Killed by external signal

Solution: the container hit its physical memory limit and was killed, producing this error. Check the Spark UI for specifics; data skew is one likely cause. (A config sketch follows.)
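
If the limit itself is the problem rather than skew, giving the container more headroom is a common mitigation (a sketch with illustrative values; off-heap usage counts against the container limit, which is why raising the overhead often helps):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .config("spark.executor.memory", "8g")
  .config("spark.executor.memoryOverhead", "2g") // spark.yarn.executor.memoryOverhead before Spark 2.3
  .getOrCreate()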

12. While processing features, we needed to split shards by the city each feature row belongs to (for feature localization), with much of the other information read from multiple Hive tables. In the first iteration, for generality, we built a DataFrame from the multi-source data, registered it as an in-memory table, and ran Spark SQL in a loop, writing each result into its own HDFS directory. The scheme worked and was nicely flexible, but with a very large number of shards the run time exceeded an hour, which was completely unbearable...

      After digging through various references and the source code, we changed the code to turn each row of the DataFrame into a Pair<shardNum,row> and overrode RDDMultipleTextOutputFormat so that results are written directly to multiple files keyed by shard, with no more looped Spark SQL. Performance improved by more than a dozen times (see the sketch below).
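
A minimal sketch of the approach (class and helper names are our own; shardNumFor stands in for whatever derives the shard key from a row):

import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

class RDDMultipleTextOutputFormat extends MultipleTextOutputFormat[Any, Any] {
  // Route each record to a file named after its shard key.
  override def generateFileNameForKeyValue(key: Any, value: Any, name: String): String =
    s"shard=$key/$name"
  // Drop the key from the written record.
  override def generateActualKey(key: Any, value: Any): Any =
    NullWritable.get()
}

// Pair each row with its shard number, then write every shard in one pass
// instead of running one Spark SQL job per shard.
df.rdd
  .map(row => (shardNumFor(row), row.mkString("\t")))
  .saveAsHadoopFile("/output/path", classOf[Any], classOf[Any],
    classOf[RDDMultipleTextOutputFormat])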

Source:
https://blog.csdn.net/shijinxin3907837/article/details/108143829