2016-02-03 48 views

I am running Flume 1.6.0 in one virtual machine and Hadoop 2.7.1 in another virtual machine. When I send Avro events to Flume 1.6.0 and it tries to write them to the Hadoop 2.7.1 HDFS, the following exception occurs: HDFS IO error org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4

(SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:455)] HDFS IO error 
org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4 
    at org.apache.hadoop.ipc.Client.call(Client.java:1113) 
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229) 
    at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source) 
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:497) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62) 
    at com.sun.proxy.$Proxy6.getProtocolVersion(Unknown Source) 
    at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422) 
    at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183) 
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281) 
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100) 
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446) 
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67) 
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263) 
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187) 
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:243) 
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235) 
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679) 
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50) 
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Thread 

I tried to fix it by adding these .jar files to the Flume lib folder:

hadoop-common-2.7.1.jar

avro-1.7.7.jar instead of avro-1.7.4.jar

avro-ipc-1.7.7.jar instead of avro-ipc-1.7.4.jar

guava-18.0.jar instead of guava-11.0.2.jar

But the problem is still not solved.
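For context, a minimal agent configuration of the kind involved here (Avro source feeding an HDFS sink) might look like the following. This is a sketch, not the poster's actual config: the agent name, port, channel type, and NameNode URI are all assumptions.

```properties
# Hypothetical agent "a1": Avro source -> memory channel -> HDFS sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 41414
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

a1.sinks.k1.type = hdfs
# Assumed NameNode host/port -- must point at the Hadoop 2.7.1 VM.
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.channel = c1
```

Whatever the exact config, the HDFS sink resolves the `hdfs://` path through the Hadoop client jars on Flume's classpath, which is why the jar versions matter here.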

Answer


The Flume NG HDFS sink depends on the following .jar files:

hadoop-auth-2.4.0.jar 
hadoop-common-2.4.0.jar 
hadoop-hdfs-2.4.0.jar 
commons-configuration-1.10.jar 

which are not included in Flume's lib folder.

By adding these .jar files, the exception was successfully resolved.
