2017-08-17

I noticed that computing the model's accuracy takes almost as long as building the model itself, which doesn't seem right. I have a cluster of six virtual machines. The most expensive step by far is the first iteration of the `for item in range(numClasses)` loop. What RDD operations are happening behind the scenes there? Why does the `.precision` method of the `MulticlassMetrics` object take so much time?

Code:

%pyspark
from pyspark import StorageLevel  # needed for the persist() calls below
from pyspark.sql.types import DoubleType
from pyspark.sql.functions import UserDefinedFunction
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import DecisionTree
from pyspark.mllib.evaluation import MulticlassMetrics
from timeit import default_timer

def decision_tree(train, test, numClasses, CatFeatInf):
    # Convert DataFrame rows to LabeledPoints (last column is the label)
    ref = default_timer()
    training_data = train.rdd.map(lambda row: LabeledPoint(row[-1], row[:-1])).persist(StorageLevel.MEMORY_ONLY)
    testing_data = test.rdd.map(lambda row: LabeledPoint(row[-1], row[:-1])).persist(StorageLevel.MEMORY_ONLY)
    print 'transformed in dense data in: %.3f seconds' % (default_timer() - ref)

    ref = default_timer()
    model = DecisionTree.trainClassifier(training_data,
                                         numClasses=numClasses,
                                         maxDepth=7,
                                         categoricalFeaturesInfo=CatFeatInf,
                                         impurity='entropy',
                                         maxBins=max(CatFeatInf.values()))
    print 'model created in: %.3f seconds' % (default_timer() - ref)

    # Pair each prediction with its true label
    ref = default_timer()
    predictions_and_labels = model.predict(testing_data.map(lambda r: r.features)) \
                                  .zip(testing_data.map(lambda r: r.label))
    print 'predictions made in: %.3f seconds' % (default_timer() - ref)

    ref = default_timer()
    metrics = MulticlassMetrics(predictions_and_labels)

    res = {}
    for item in range(numClasses):
        try:
            res[item] = metrics.precision(item)
        except Exception:
            res[item] = 0.0
    print 'accuracy calculated in: %.3f seconds' % (default_timer() - ref)
    return res

transformed in dense data in: 0.074 seconds

model created in: 0.095 seconds

accuracy calculated in: 355.276 seconds

predictions made in: 346.497 seconds

Answer

There were probably some pending RDD operations that only got executed the first time I called metrics.precision(0).
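That is consistent with Spark's lazy evaluation: `model.predict` and `zip` are transformations that return immediately without doing any work, and the deferred computation only runs when an action first consumes the RDD, which here happens inside the `precision` loop. The same effect can be illustrated in plain Python with a generator; this is a minimal analogy, not Spark code, and `slow_map` is a made-up helper:

```python
import time

def slow_map(xs):
    # Generator: like an RDD transformation, nothing runs until consumed.
    for x in xs:
        time.sleep(0.01)  # simulate expensive per-record work
        yield x * x

t0 = time.perf_counter()
predictions = slow_map(range(20))   # "transformation": returns instantly
define_time = time.perf_counter() - t0

t0 = time.perf_counter()
total = sum(predictions)            # "action": the deferred work runs here
consume_time = time.perf_counter() - t0

# Defining the pipeline is near-instant; consuming it pays the full cost.
print(define_time, consume_time, total)
```

In the Spark code above, calling `predictions_and_labels.persist()` and then an action such as `predictions_and_labels.count()` before the "predictions made" print would force the prediction stage to execute inside that timer, so the precision loop would then measure only the metric computation itself.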