I am using a thread pool to do some work. My pool size is only 8, but I get the following error: java.lang.OutOfMemoryError: unable to create new native thread
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to create new native thread
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
at java.util.concurrent.FutureTask.get(FutureTask.java:83)
at com.TransferFiles.transferFilesToHadoop(TransferFiles.java:88)
at com.TransferJob.execute(TransferJob.java:25)
at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:525)
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:555)
at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:184)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1140)
at org.apache.hadoop.ipc.Client.call(Client.java:986)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at $Proxy1.getFileInfo(Unknown Source)
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
at $Proxy1.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:676)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:521)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:692)
at org.apache.hadoop.fs.FileUtil.checkDest(FileUtil.java:349)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:205)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1119)
at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1095)
at com.FileCopyRoutine.call(TransferFiles.java:325)
at com.FileCopyRoutine.call(TransferFiles.java:257)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
I use:
private static final ExecutorService exec = Executors.newFixedThreadPool(TransferToHadoopUtilities.numOfThreads);
numOfThreads = 8
Future<Boolean> futureTask = exec.submit(new FileCopyRoutine(srcSubResultPath, destSubResultPath,execId));
FileCopyRoutine implements Callable. I may be making several hundred submissions at the same time. Could anyone please give me some hints about this error?
Thanks a lot!
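For what it's worth, queuing hundreds of submissions to a fixed pool of 8 does not by itself create hundreds of threads; per the stack trace, the extra native threads come from Hadoop's IPC client (Client$Connection.setupIOstreams starts a thread per connection). Capping the number of in-flight tasks with a Semaphore can still limit concurrent RPC connections. A minimal sketch, where the limit of 16 and the no-op task body are placeholders for the real FileCopyRoutine work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

public class BoundedSubmit {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newFixedThreadPool(8);
        Semaphore inFlight = new Semaphore(16);        // hypothetical cap on pending tasks
        List<Future<Boolean>> futures = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            inFlight.acquire();                        // block once 16 tasks are outstanding
            futures.add(exec.submit(() -> {
                try {
                    return Boolean.TRUE;               // stand-in for the real copy routine
                } finally {
                    inFlight.release();                // free a slot for the next submission
                }
            }));
        }
        int ok = 0;
        for (Future<Boolean> f : futures) {
            if (f.get()) ok++;                         // get() rethrows task failures
        }
        exec.shutdown();
        System.out.println(ok);                        // prints 100
    }
}
```

This keeps the submission loop itself from racing ahead; whether it helps here depends on how many connection threads the Hadoop client holds open per task.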
I added the "hadoop" tag since it seems related. – home 2012-02-29 19:47:35
Well, apparently the JVM has run out of free memory. Have you allocated any extra memory, or are you starting your application with the default memory settings? – 2012-02-29 19:47:52
How many jobs are you submitting to the pool? Is it really 100s or more? – Gray 2012-02-29 19:48:45
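As a diagnostic step (not suggested in the thread itself), the live thread count at the moment of failure distinguishes a thread leak from heap exhaustion: "unable to create new native thread" usually means the OS refused a new thread, not that the heap is full. The count can be read through the JMX thread bean; a steadily growing number points at leaked connection or worker threads. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCount {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // getThreadCount() includes daemon threads, e.g. Hadoop IPC connection threads.
        int live = bean.getThreadCount();
        // At minimum the main thread is alive, so the count is positive.
        System.out.println(live > 0);
    }
}
```

Logging this value periodically while the transfer job runs would show whether threads accumulate across Quartz firings.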