2017-09-28 91 views

I have followed all the steps in the tutorial for the Hadoop installation and for Flume. I am new to big data tools. I am getting the following error and I do not understand where the problem is. Storing data in Hadoop using Flume.

I have also read many posts about the installation, but I am still facing this problem. My end goal is to perform Twitter sentiment analysis using R.

17/09/29 02:25:39 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting 
17/09/29 02:25:39 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:/home/shivam/apache-flume-1.6.0-bin/conf/flume.conf 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Added sinks: HDFS Agent: TwitterAgent 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Processing:HDFS 
17/09/29 02:25:39 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [TwitterAgent] 
17/09/29 02:25:39 INFO node.AbstractConfigurationProvider: Creating channels 
17/09/29 02:25:39 INFO channel.DefaultChannelFactory: Creating instance of channel MemChannel type memory 
17/09/29 02:25:39 INFO node.AbstractConfigurationProvider: Created channel MemChannel 
17/09/29 02:25:39 INFO source.DefaultSourceFactory: Creating instance of source Twitter, type org.apache.flume.source.twitter.TwitterSource 
17/09/29 02:25:39 INFO twitter.TwitterSource: Consumer Key:  'fRw12aumIqkAWD6PP5ZHk7vva' 
17/09/29 02:25:39 INFO twitter.TwitterSource: Consumer Secret:  'K9K0yL2pwngp3JXEdMGWUOEB7AaGWswXcq72WveRvnD4ZSphNQ' 
17/09/29 02:25:39 INFO twitter.TwitterSource: Access Token:  '771287280438968320-XnbtNtBt40cs6gUOk6F9bjgmUABM0qG' 
17/09/29 02:25:39 INFO twitter.TwitterSource: Access Token Secret: 'afUppGRqcRi2p9fzLhVdYQXkfMEm72xduaWD6uNs3HhKg' 
17/09/29 02:25:39 INFO sink.DefaultSinkFactory: Creating instance of sink: HDFS, type: hdfs 
17/09/29 02:25:39 INFO node.AbstractConfigurationProvider: Channel MemChannel connected to [Twitter, HDFS] 
17/09/29 02:25:39 INFO node.Application: Starting new configuration:{ sourceRunners:{Twitter=EventDrivenSourceRunner: { source:org.apache.flume.source.twitter.TwitterSource{name:Twitter,state:IDLE} }} sinkRunners:{HDFS=SinkRunner: { policy:[email protected] counterGroup:{ name:null counters:{} } }} channels:{MemChannel=org.apache.flume.channel.MemoryChannel{name: MemChannel}} } 
17/09/29 02:25:39 INFO node.Application: Starting Channel MemChannel 
17/09/29 02:25:39 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: MemChannel: Successfully registered new MBean. 
17/09/29 02:25:39 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: MemChannel started 
17/09/29 02:25:39 INFO node.Application: Starting Sink HDFS 
17/09/29 02:25:39 INFO node.Application: Starting Source Twitter 
17/09/29 02:25:39 INFO twitter.TwitterSource: Starting twitter source org.apache.flume.source.twitter.TwitterSource{name:Twitter,state:IDLE} ... 
17/09/29 02:25:39 INFO twitter.TwitterSource: Twitter source Twitter started. 
17/09/29 02:25:39 INFO twitter4j.TwitterStreamImpl: Establishing connection. 
17/09/29 02:25:39 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SINK, name: HDFS: Successfully registered new MBean. 
17/09/29 02:25:39 INFO instrumentation.MonitoredCounterGroup: Component type: SINK, name: HDFS started 
17/09/29 02:25:42 INFO twitter4j.TwitterStreamImpl: Connection established. 
17/09/29 02:25:42 INFO twitter4j.TwitterStreamImpl: Receiving status stream. 
17/09/29 02:25:42 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false 
17/09/29 02:25:42 INFO hdfs.BucketWriter: Creating hdfs://localhost:9000/user/flume/tweets/FlumeData.1506632142370.tmp 
17/09/29 02:25:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
17/09/29 02:25:44 WARN hdfs.HDFSEventSink: HDFS IO error 
java.net.ConnectException: Call From maverick/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) 
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1480) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1407) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) 
    at com.sun.proxy.$Proxy13.create(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy14.create(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1623) 
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1703) 
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1638) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787) 
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:776) 
    at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86) 
    at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113) 
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:246) 
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235) 
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679) 
    at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50) 
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
    at java.lang.Thread.run(Thread.java:748) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707) 
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1446) 
    ... 34 more 
17/09/29 02:25:45 INFO twitter.TwitterSource: Processed 100 docs 
17/09/29 02:25:45 INFO hdfs.BucketWriter: Creating hdfs://localhost:9000/user/flume/tweets/FlumeData.1506632142371.tmp 
17/09/29 02:25:45 WARN hdfs.HDFSEventSink: HDFS IO error 
java.net.ConnectException: Call From maverick/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 
17/09/29 02:25:48 INFO twitter.TwitterSource: Processed 200 docs 
17/09/29 02:25:50 INFO twitter.TwitterSource: Processed 300 docs 
17/09/29 02:25:50 INFO hdfs.BucketWriter: Creating hdfs://localhost:9000/user/flume/tweets/FlumeData.1506632142373.tmp 
17/09/29 02:25:50 WARN hdfs.HDFSEventSink: HDFS IO error 
java.net.ConnectException: Call From maverick/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 

Is there any complete solution? I can start over from scratch.

Answer


Flume is trying to connect to Hadoop's Namenode, which should be listening at localhost:9000, but without success.

That behaviour makes sense: Hadoop's Namenode usually listens on TCP port 8020 or 9000 for the inter-process communication (IPC) related to the Hadoop file system (HDFS), and in this setup Flume is configured to connect to TCP/9000.

Can you confirm you have such a process running on localhost and listening on TCP/9000? You can check this with the lsof or netstat commands. You should also check the Hadoop configuration to see which port Hadoop has opened for the Namenode's IPC.
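As an illustration, a quick way to test whether anything is accepting connections on the Namenode endpoint is a plain TCP connect. This is a minimal sketch; the host and port are simply the values from the log above, not a statement about where your Namenode actually listens:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

# The Namenode IPC endpoint that Flume is trying to reach in the log above
print(port_open("127.0.0.1", 9000))
```

If this prints False while Hadoop is supposedly running, the Namenode is either down or listening on a different port, which matches the "Connection refused" in the stack trace.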

Then, there are two options:

  • You change your Namenode's IPC listening port to 9000. This is done by configuring the fs.default.name property (fs.defaultFS in newer Hadoop versions) in the core-site.xml file.
  • You configure Flume to connect to the port you have configured on Hadoop. This is done by setting the sink's hdfs.path property to hdfs://127.0.0.1:<your_port>/your/path/.
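For the second option, a sketch of the relevant flume.conf line, assuming the agent and sink names from the log (TwitterAgent, HDFS) and a hypothetical Namenode IPC port of 8020:

```
# flume.conf (fragment): point the HDFS sink at the port the Namenode actually uses
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://127.0.0.1:8020/user/flume/tweets
```

For the first option, the corresponding core-site.xml entry would set fs.default.name (or fs.defaultFS) to hdfs://localhost:9000. Either way, the port in the Flume sink's hdfs.path and the port in core-site.xml must match.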
I have tried that, but I am still getting the error @frb – Shivam

I have posted another question after doing the installation again. https://stackoverflow.com/questions/46530583/fetching-twitter-data-using-flume – Shivam

I used some other ports while installing Hadoop, such as 50070, and the Namenode is listening on some other port. In my flume.conf it is port 9000. – Shivam
