2016-07-25 93 views
1

I have a problem when processing more than 2700 files. With only a few hundred files it works, so I guess it is related to the limit on open files, which on Linux can be set system-wide with ulimit. I believe the file handles are not being closed, and that is why I get this error: too many open files (urllib).

I have a function that sends the files via POST:

def upload_photos(url_photo, dict, timeout): 
    photo = dict['photo'] 
    data_photo = dict['data'] 
    name = dict['name'] 
    conn = requests.post(url_photo, data=data_photo, files=photo, timeout=timeout) 
    return {'json': conn.json(), 'name': name} 

It is called from a loop over a directory listing:

for photo_path in [p.lower() for p in photos_path]: 
    if ('jpg' in photo_path or 'jpeg' in photo_path) and "thumb" not in photo_path: 
        nr_photos_upload += 1 
print("Found " + str(nr_photos_upload) + " pictures to upload") 
local_count = 0 
list_to_upload = [] 
for photo_path in [p.lower() for p in photos_path]: 
    local_count += 1 
    if ('jpg' in photo_path or 'jpeg' in photo_path) and "thumb" not in photo_path and local_count > count: 
        total_img = nr_photos_upload 
        photo_name = os.path.basename(photo_path) 
        try: 
            photo = {'photo': (photo_name, open(path + photo_path, 'rb'), 'image/jpeg')} 
            try: 
                latitude, longitude, compas = get_gps_lat_long_compass(path + photo_path) 
            except ValueError as e: 
                if e is not None: 
                    try: 
                        tags = exifread.process_file(open(path + photo_path, 'rb')) 
                        latitude, longitude = get_exif_location(tags) 
                        compas = -1 
                    except Exception: 
                        continue 
            if compas == -1: 
                data_photo = {'coordinate': str(latitude) + "," + str(longitude), 
                              'sequenceId': id_sequence, 
                              'sequenceIndex': count 
                              } 
            else: 
                data_photo = {'coordinate': str(latitude) + "," + str(longitude), 
                              'sequenceId': id_sequence, 
                              'sequenceIndex': count, 
                              'headers': compas 
                              } 
            info_to_upload = {'data': data_photo, 'photo': photo, 'name': photo_name} 
            list_to_upload.append(info_to_upload) 
            count += 1 
        except Exception as ex: 
            print(ex) 
count_uploaded = 0 
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor: 
    # Upload feature called from here 
    future_to_url = {executor.submit(upload_photos, url_photo, dict, 100): dict for dict in list_to_upload} 
    for future in concurrent.futures.as_completed(future_to_url): 
        try: 
            data = future.result()['json'] 
            name = future.result()['name'] 
            print("processing {}".format(name)) 
            if data['status']['apiCode'] == "600": 
                percentage = float((float(count_uploaded) * 100) / float(total_img)) 
                print(("Uploaded - " + str(count_uploaded) + ' of total :' + str( 
                    total_img) + ", percentage: " + str(round(percentage, 2)) + "%")) 
            elif data['status']['apiCode'] == "610": 
                print("skipping - a required argument is missing for upload") 
            elif data['status']['apiCode'] == "611": 
                print("skipping - image does not have GPS location metadata") 
            elif data['status']['apiCode'] == "660": 
                print("skipping - duplicate image") 
            else: 
                print("skipping - bad image") 
            count_uploaded += 1 
            with open(path + "count_file.txt", "w") as fis: 
                fis.write(str(count_uploaded)) 
        except Exception as exc: 
            print('generated an exception: %s' % exc) 
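Since the question itself suspects that handles are never closed, one fix (a sketch, not the poster's code; the `'path'` key is a hypothetical replacement for passing an already-open handle) is to open each file inside the upload function with a `with` block, so the descriptor is released as soon as the POST returns:

```python
import requests

def upload_photos(url_photo, info, timeout):
    # Open the file only for the duration of the request; the "with"
    # block guarantees the handle is closed when the POST returns,
    # so at most max_workers files are open at any moment.
    with open(info['path'], 'rb') as fh:
        photo = {'photo': (info['name'], fh, 'image/jpeg')}
        conn = requests.post(url_photo, data=info['data'],
                             files=photo, timeout=timeout)
    return {'json': conn.json(), 'name': info['name']}
```

The caller would then store the photo's path in `list_to_upload` instead of an open file object, and the exifread pass would likewise need its own `with open(...)`.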
+0

So the problem is too many files being processed simultaneously. Probably the simplest fix is: on any failure, do not stop, but retry after a few milliseconds (taking care to avoid an infinite loop). That way all the files will eventually be processed. – Ilya
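That retry idea can be sketched as a small hypothetical helper with a bounded number of attempts and a short sleep, so it cannot loop forever:

```python
import time

def retry(func, attempts=3, delay=0.05):
    # Call func(); on an OSError (e.g. "Too many open files")
    # wait briefly and try again, up to a fixed number of attempts.
    last_exc = None
    for _ in range(attempts):
        try:
            return func()
        except OSError as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc
```

Each upload call would then be wrapped as `retry(lambda: upload_photos(url_photo, item, 100))`.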

+0

In general it pays to upload no more than a handful of files at once, especially when you are hammering the same server. Limit the number of simultaneous connections. –
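That cap is exactly the `max_workers` argument the question's code already passes to `ThreadPoolExecutor`; a minimal sketch of bounding simultaneous uploads (the worker and item list here are placeholders):

```python
import concurrent.futures

def upload_all(items, worker, max_workers=4):
    # A small pool bounds both the number of simultaneous connections
    # and the number of simultaneously open file handles.
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as ex:
        futures = [ex.submit(worker, item) for item in items]
        for fut in concurrent.futures.as_completed(futures):
            results.append(fut.result())
    return results
```

With `max_workers=4` at most four uploads (and four open files) are in flight at once, regardless of how many items are queued.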

Answers

1

In C you can call _setmaxstdio to change the number of files that can be open at once.

From Python you have to use win32file from pywin32:

import win32file 
win32file._setmaxstdio(1024) #set max number of files to 1024 

The default is 512. Also make sure the maximum you set is actually supported by your platform.

Reference: https://msdn.microsoft.com/en-us/library/6e3b887c.aspx
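This answer is Windows-specific; on Linux (where the question mentions ulimit) the equivalent check, and a raise of the soft limit, can be done with the standard `resource` module. A sketch:

```python
import resource

# Current per-process limits on open file descriptors
# (the soft limit is what `ulimit -n` reports).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# An unprivileged process may raise its soft limit up to the
# hard limit; raising the hard limit itself requires root.
if hard != resource.RLIM_INFINITY and soft < hard:
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```

As the comments below note, though, raising the limit only masks the leak; closing the handles is the real fix.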

+1

"Maximum number of open files" is the kind of limit that is nowadays high enough that hitting it should make you question your algorithm, not look for a way to raise the limit... –

+1

This option does not work and, as Matteo said, it is not best practice to do something like this anyway. – James