spark-sql startup error: util.NativeCodeLoader

Runtime environment | CentOS 7.0 | Spark 2.2.0 | Scala 2.11.8 | mysql-connector-java-5.1.38.jar

Starting spark-sql reports an error. Note that the util.NativeCodeLoader line is only a harmless warning (no native Hadoop libraries, so built-in Java classes are used); the actual failure, further down in the log, is "All masters are unresponsive! Giving up."

  • 19/03/16 20:40:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@master bin]# ./spark-sql --master spark://master:7077 --jars /root/mysql-connector-java-5.1.38.jar 
19/03/16 20:40:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/03/16 20:40:48 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
19/03/16 20:40:48 INFO metastore.ObjectStore: ObjectStore, initialize called
19/03/16 20:40:49 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
19/03/16 20:40:49 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
19/03/16 20:40:58 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
19/03/16 20:41:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
19/03/16 20:41:03 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
19/03/16 20:41:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
19/03/16 20:41:05 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
19/03/16 20:41:06 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
19/03/16 20:41:06 INFO metastore.ObjectStore: Initialized ObjectStore
19/03/16 20:41:07 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
19/03/16 20:41:07 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
19/03/16 20:41:11 INFO metastore.HiveMetaStore: Added admin role in metastore
19/03/16 20:41:11 INFO metastore.HiveMetaStore: Added public role in metastore
19/03/16 20:41:11 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
19/03/16 20:41:12 INFO metastore.HiveMetaStore: 0: get_all_databases
19/03/16 20:41:12 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_all_databases
19/03/16 20:41:12 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
19/03/16 20:41:12 INFO HiveMetaStore.audit: ugi=root ip=unknown-ip-addr cmd=get_functions: db=default pat=*
19/03/16 20:41:12 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
19/03/16 20:41:13 INFO session.SessionState: Created local directory: /tmp/6a909914-1d3e-40f9-b997-ec14d17ef1db_resources
19/03/16 20:41:14 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/6a909914-1d3e-40f9-b997-ec14d17ef1db
19/03/16 20:41:14 INFO session.SessionState: Created local directory: /tmp/root/6a909914-1d3e-40f9-b997-ec14d17ef1db
19/03/16 20:41:14 INFO session.SessionState: Created HDFS directory: /tmp/hive/root/6a909914-1d3e-40f9-b997-ec14d17ef1db/_tmp_space.db
19/03/16 20:41:15 INFO spark.SparkContext: Running Spark version 2.2.0
19/03/16 20:41:15 INFO spark.SparkContext: Submitted application: SparkSQL::192.168.83.10
19/03/16 20:41:15 INFO spark.SecurityManager: Changing view acls to: root
19/03/16 20:41:15 INFO spark.SecurityManager: Changing modify acls to: root
19/03/16 20:41:15 INFO spark.SecurityManager: Changing view acls groups to:
19/03/16 20:41:15 INFO spark.SecurityManager: Changing modify acls groups to:
19/03/16 20:41:15 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
19/03/16 20:41:17 INFO util.Utils: Successfully started service 'sparkDriver' on port 58142.
19/03/16 20:41:17 INFO spark.SparkEnv: Registering MapOutputTracker
19/03/16 20:41:17 INFO spark.SparkEnv: Registering BlockManagerMaster
19/03/16 20:41:17 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
19/03/16 20:41:17 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
19/03/16 20:41:17 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-f02e18ee-fc40-4a05-80ba-0c60c369ed5a
19/03/16 20:41:17 INFO memory.MemoryStore: MemoryStore started with capacity 413.9 MB
19/03/16 20:41:18 INFO spark.SparkEnv: Registering OutputCommitCoordinator
19/03/16 20:41:18 INFO util.log: Logging initialized @42429ms
19/03/16 20:41:19 INFO server.Server: jetty-9.3.z-SNAPSHOT
19/03/16 20:41:19 INFO server.Server: Started @42899ms
19/03/16 20:41:19 WARN util.Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
19/03/16 20:41:19 INFO server.AbstractConnector: Started ServerConnector@2e49b91c{HTTP/1.1,[http/1.1]}{0.0.0.0:4041}
19/03/16 20:41:19 INFO util.Utils: Successfully started service 'SparkUI' on port 4041.
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@18c7f6b5{/jobs,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@53bb71e5{/jobs/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@15994b0b{/jobs/job,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4cb00fa5{/jobs/job/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@698d6d30{/stages,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3407aa4f{/stages/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@538b3c88{/stages/stage,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@22ae905f{/stages/stage/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4fbaa7f5{/stages/pool,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6d4a05f7{/stages/pool/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4b476233{/storage,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4f235e8e{/storage/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@29dcdd1c{/storage/rdd,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@524f5ea5{/storage/rdd/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@17134190{/environment,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5599b5bb{/environment/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4264beb8{/executors,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7cd3e0da{/executors/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@67e77f52{/executors/threadDump,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1d1bf7bf{/executors/threadDump/json,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d43a1b7{/static,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@44aa91e2{/,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@650a1aff{/api,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@11c7a0b4{/jobs/job/kill,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@653a5967{/stages/stage/kill,null,AVAILABLE,@Spark}
19/03/16 20:41:19 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.83.10:4041
19/03/16 20:41:19 INFO spark.SparkContext: Added JAR file:/root/mysql-connector-java-5.1.38.jar at spark://192.168.83.10:58142/jars/mysql-connector-java-5.1.38.jar with timestamp 1552740079770
19/03/16 20:41:20 INFO client.StandaloneAppClient$ClientEndpoint: Connecting to master spark://master:7077...
19/03/16 20:41:20 INFO client.TransportClientFactory: Successfully created connection to master/192.168.83.10:7077 after 132 ms (0 ms spent in bootstraps)
19/03/16 20:41:40 INFO client.StandaloneAppClient$ClientEndpoint: Connecting to master spark://master:7077...
19/03/16 20:42:00 INFO client.StandaloneAppClient$ClientEndpoint: Connecting to master spark://master:7077...
19/03/16 20:42:20 ERROR cluster.StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
19/03/16 20:42:20 WARN cluster.StandaloneSchedulerBackend: Application ID is not initialized yet.
19/03/16 20:42:20 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34217.
19/03/16 20:42:20 INFO netty.NettyBlockTransferService: Server created on 192.168.83.10:34217
19/03/16 20:42:20 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
19/03/16 20:42:20 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.83.10, 34217, None)
19/03/16 20:42:20 INFO server.AbstractConnector: Stopped Spark@2e49b91c{HTTP/1.1,[http/1.1]}{0.0.0.0:4041}
19/03/16 20:42:20 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.83.10:4041
19/03/16 20:42:20 INFO storage.BlockManagerMasterEndpoint: Registering block manager 192.168.83.10:34217 with 413.9 MB RAM, BlockManagerId(driver, 192.168.83.10, 34217, None)
19/03/16 20:42:20 INFO cluster.StandaloneSchedulerBackend: Shutting down all executors
19/03/16 20:42:20 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.83.10, 34217, None)
19/03/16 20:42:20 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.83.10, 34217, None)
19/03/16 20:42:20 INFO cluster.CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
19/03/16 20:42:20 WARN client.StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
19/03/16 20:42:20 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/03/16 20:42:21 INFO memory.MemoryStore: MemoryStore cleared
19/03/16 20:42:21 INFO storage.BlockManager: BlockManager stopped
19/03/16 20:42:21 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
19/03/16 20:42:21 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/03/16 20:42:21 ERROR spark.SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:48)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:293)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:138)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/03/16 20:42:21 INFO spark.SparkContext: SparkContext already stopped.
19/03/16 20:42:21 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:91)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:524)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:48)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.<init>(SparkSQLCLIDriver.scala:293)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:138)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
19/03/16 20:42:21 INFO util.ShutdownHookManager: Shutdown hook called
19/03/16 20:42:21 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-06aa1468-390b-4e70-9151-405bd556805a
19/03/16 20:42:22 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-6774f957-ed7c-4d8b-b3ce-eaf17bda1df1

Root cause & solution

  • Spark is configured in high-availability (HA) mode, so ZooKeeper must be running before the Master can be reached; with ZooKeeper down, the driver keeps retrying "Connecting to master spark://master:7077..." until it gives up (see the configuration sketch below).
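
For context, a standalone HA setup registers the Master with ZooKeeper through conf/spark-env.sh. A minimal sketch, assuming a three-node ensemble on hosts master/slave1/slave2 (the hostnames and the /spark znode path are placeholders for this cluster):

# conf/spark-env.sh -- standalone HA via ZooKeeper (sketch; ensemble hosts assumed)
# With recoveryMode=ZOOKEEPER the Master stores its state in ZooKeeper,
# so clients cannot connect to spark://master:7077 until ZooKeeper is up.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER \
  -Dspark.deploy.zookeeper.url=master:2181,slave1:2181,slave2:2181 \
  -Dspark.deploy.zookeeper.dir=/spark"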

spark-sql starts normally once ZooKeeper is brought up:
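
A minimal sketch of the recovery steps, assuming zkServer.sh is on the PATH (run the ZooKeeper commands on every ensemble node):

# 1. Start ZooKeeper (repeat on each ensemble node)
zkServer.sh start
# 2. Confirm the ensemble is serving (one node reports Mode: leader, the rest follower)
zkServer.sh status
# 3. Then restart the Spark standalone cluster
cd /opt/spark-2.2.0/sbin && ./start-all.sh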

[root@master sbin]# ./start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /opt/spark-2.2.0/logs/

................. (startup log too long, omitted here) ...........................

19/03/16 20:47:52 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is file:/opt/spark-2.2.0/sbin/spark-warehouse

spark-sql>
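
Once the prompt appears, the MySQL driver passed via --jars can be smoke-tested by mapping a table through the JDBC data source. A hypothetical example (the URL, database, table, and credentials are placeholders):

-- map a MySQL table into Spark SQL through the JDBC data source
CREATE TEMPORARY VIEW mysql_smoke
USING org.apache.spark.sql.jdbc
OPTIONS (
  url "jdbc:mysql://master:3306/test",
  dbtable "some_table",
  user "root",
  password "your_password"
);
-- read back a few rows to confirm the driver and connection work
SELECT * FROM mysql_smoke LIMIT 10;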
This concludes the article; thanks for reading. This is an original post. For questions, contact strivedeer@163.com