2017-04-10 2870 views

My project uses Maven + IntelliJ, and I develop on Windows. I started with the latest Scala library, version 2.12.2. To get classes such as SQLContext, I had to import the Spark jars, and then got: Warning: multiple versions of scala libraries detected?

But then I was told that if I wanted to use these Spark jars, I had to downgrade my Scala version, so I removed the library and switched to 2.10. But now when I run `mvn clean install` I get this:

[WARNING] Expected all dependencies to require Scala version: 2.11.7 
[WARNING] com.twitter:chill_2.11:0.5.0 requires scala version: 2.11.7 
[WARNING] com.typesafe.akka:akka-remote_2.11:2.3.11 requires scala version: 2.11.7 
[WARNING] com.typesafe.akka:akka-actor_2.11:2.3.11 requires scala version: 2.11.7 
[WARNING] com.typesafe.akka:akka-slf4j_2.11:2.3.11 requires scala version: 2.11.7 
[WARNING] org.apache.spark:spark-core_2.11:1.6.1 requires scala version: 2.11.7 
[WARNING] org.json4s:json4s-jackson_2.11:3.2.10 requires scala version: 2.11.7 
[WARNING] org.json4s:json4s-core_2.11:3.2.10 requires scala version: 2.11.7 
[WARNING] org.json4s:json4s-ast_2.11:3.2.10 requires scala version: 2.11.7 
[WARNING] org.json4s:json4s-core_2.11:3.2.10 requires scala version: 2.11.0 
[WARNING] Multiple versions of scala libraries detected! 
[INFO] E:\...\src\main\scala:-1: info: compiling 
[INFO] Compiling 4 source files to E:\...\target\classes at 1491813951772 
[ERROR] E:\...\qubole\mapreduce\ConvertToParquetFormat.scala:2: error: object sql is not a member of package org.apache.spark 
[ERROR] import org.apache.spark.sql.SQLContext 
[ERROR]      ^
[ERROR] E:\...\qubole\mapreduce\ConvertToParquetFormat.scala:15: error: not found: type SQLContext 
[ERROR] val sqlContext = new SQLContext(sc) 
[ERROR]      ^
[ERROR] E:\...\mapreduce\ConvertToParquetFormat.scala:24: error: value toDF is not a member of org.apache.spark.rdd.RDD[....qubole.mapreduce.ConvertToParquetFormat.OmnitureHit] 
[ERROR] possible cause: maybe a semicolon is missing before `value toDF'? 
[ERROR] .toDF().write.parquet ("file:///C:/Users/Desktop/456") 
[ERROR] ^
[ERROR] three errors found 
[INFO] ------------------------------------------------------------------------ 
[INFO] BUILD FAILURE 
[INFO] ------------------------------------------------------------------------ 
[INFO] Total time: 03:00 min 
[INFO] Finished at: 2017-04-10T16:45:58+08:00 
[INFO] Final Memory: 33M/360M 
[INFO] ------------------------------------------------------------------------ 
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.0:compile (scala-compile-first) on project packages-omniture-qubole-mapreduce: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1] 
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. 
[ERROR] Re-run Maven using the -X switch to enable full debug logging. 
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles: 
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException 

E:\github\mia\packages-omniture-qubole-mapreduce> 

So I removed the 2.10 Scala version, and then it told me I didn't have a Scala library. So I added Scala 2.11, since it looks like some of the jars require version 2.11. But it still reports Multiple versions of scala libraries detected?

However, when I Ctrl + left-click the word SQLContext, I can jump to the class, but it shows up as anonymous, with no readable source.

Is this because I forgot to delete the old jar or library contents? This is my remaining dependency list:

This is my pom.xml:

<?xml version="1.0" encoding="UTF-8"?> 
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> 
    <modelVersion>4.0.0</modelVersion> 

    <groupId>com.company.www</groupId> 
    <artifactId>packages-omniture-qubole-mapreduce</artifactId> 
    <version>1.0-SNAPSHOT</version> 

    <parent> 
     <groupId>com.company.www.platform</groupId> 
     <artifactId>platform-parent-spark</artifactId> 
     <version>0.1.41</version> 
    </parent> 

    <properties> 
     <spark.mapreduce.mainclass>com.company.www.packages.omniture.qubole.mapreduce.SampleMapReduceJob</spark.mapreduce.mainclass> 
    </properties> 


    <dependencies> 
     <dependency> 
      <groupId>org.apache.spark</groupId> 
      <artifactId>spark-core_${scala.major.minor.version}</artifactId> 
      <version>${spark.version}</version> 
      <scope>provided</scope> 
     </dependency> 
     <dependency> 
      <groupId>com.company.www.commons</groupId> 
      <artifactId>commons-spark</artifactId> 
      <version>[1.0.17, ]</version> 
     </dependency> 
     <dependency> 
      <groupId>com.company.www</groupId> 
      <artifactId>exp-user-interaction-messages-v1</artifactId> 
      <version>[1.4,]</version> 
     </dependency> 
     <dependency> 
      <groupId>org.scalaj</groupId> 
      <artifactId>scalaj-http_${scala.major.minor.version}</artifactId> 
      <version>1.1.4</version> 
     </dependency> 
     <dependency> 
      <groupId>com.google.code.gson</groupId> 
      <artifactId>gson</artifactId> 
      <version>2.3</version> 
     </dependency> 
     <dependency> 
      <groupId>org.parboiled</groupId> 
      <artifactId>parboiled-java</artifactId> 
      <version>1.0.2</version> 
      <scope>test</scope> 
     </dependency> 
    </dependencies> 


    <build> 
     <plugins> 
      <plugin> 
       <artifactId>maven-shade-plugin</artifactId> 
       <version>2.4</version> 
       <executions> 
        <execution> 
         <phase>package</phase> 
         <goals> 
          <goal>shade</goal> 
         </goals> 
         <configuration> 
          <finalName>packages-omniture-qubole-mapreduce</finalName> 
          <shadedArtifactAttached>false</shadedArtifactAttached> 
          <artifactSet> 
           <includes> 
            <include>*:*</include> 
           </includes> 
          </artifactSet> 
          <filters> 
           <filter> 
            <artifact>*:*</artifact> 
            <excludes> 
             <exclude>META-INF/*.SF</exclude> 
             <exclude>META-INF/*.DSA</exclude> 
             <exclude>META-INF/*.RSA</exclude> 
            </excludes> 
           </filter> 
          </filters> 
          <transformers> 
           <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer" /> 
           <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer"> 
            <resource>reference.conf</resource> 
           </transformer> 
           <transformer implementation="org.apache.maven.plugins.shade.resource.DontIncludeResourceTransformer"> 
            <resource>log4j.properties</resource> 
           </transformer> 
           <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> 
            <mainClass>${spark.mapreduce.mainclass}</mainClass> 
           </transformer> 
          </transformers> 
          <relocations> 
           <relocation> 
            <pattern>org.eclipse.jetty</pattern> 
            <shadedPattern>org.spark-project.jetty</shadedPattern> 
            <includes> 
             <include>org.eclipse.jetty.**</include> 
            </includes> 
           </relocation> 
           <relocation> 
            <pattern>com.google.common</pattern> 
            <shadedPattern>org.spark-project.guava</shadedPattern> 
            <excludes> 
             <exclude>com/google/common/base/Absent*</exclude> 
             <exclude>com/google/common/base/Function</exclude> 
             <exclude>com/google/common/base/Optional*</exclude> 
             <exclude>com/google/common/base/Present*</exclude> 
             <exclude>com/google/common/base/Supplier</exclude> 
            </excludes> 
           </relocation> 
          </relocations> 
         </configuration> 
        </execution> 
       </executions> 
      </plugin> 
     </plugins> 
    </build> 

</project> 
Post your POM file. –

I have added the pom, please help me.. – daxue

Answers

4

Include the following in the POM:

<dependency> 
    <groupId>org.apache.spark</groupId> 
    <artifactId>spark-sql_${scala.major.minor.version}</artifactId> 
    <version>${spark.version}</version> 
</dependency> 
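For the `${scala.major.minor.version}` and `${spark.version}` placeholders above to resolve, those properties must be defined somewhere, typically in this POM or the parent. A minimal sketch, assuming the parent POM does not already define them, with values inferred from the build log (spark-core_2.11:1.6.1, Scala 2.11.7):

```xml
<!-- Sketch: property names taken from the question's pom.xml; the values
     are inferred from the warnings in the build log, not from the parent POM -->
<properties>
    <scala.major.minor.version>2.11</scala.major.minor.version>
    <scala.version>2.11.7</scala.version>
    <spark.version>1.6.1</spark.version>
</properties>
```

Keeping spark-sql at the same `${spark.version}` and Scala suffix as spark-core is what avoids the "multiple versions of scala libraries" warning recurring.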
What does this do? – radbrawler

The SQLContext class is not available in spark-core_*; it is available in spark-sql. Give it a try. –

Yes, I tried it and the build succeeded! But when I run the main method, I get an error... 2017-04-10 19:52:40 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2017-04-10 19:52:40 ERROR Shell:373 - Failed to locate the winutils binary in the hadoop binary path java.io.IOException: Could not locate executable \bin\winutils.exe in the Hadoop binaries. What can I do..... – daxue

0

The problem is probably those com.company.www dependencies (and by not specifying their exact versions, you make things worse). They hardcode a specific Scala version, and you will have to look through their dependencies to find which ones (look for _2.10, _2.11 or _2.12 suffixes).

Assuming these are your company's packages, you will need to either create separate artifacts for the different Scala versions or settle on one specific Scala version (for example, by requiring a common parent POM for all of your Spark projects).
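To find which artifacts pin which Scala version, the usual approach is to run `mvn dependency:tree` and search the output for the binary-version suffix. A minimal sketch (the deps.txt content is illustrative, taken from the warnings in the question; a real run would pipe `mvn dependency:tree` output instead):

```shell
# Sketch: scan a captured dependency tree for Scala binary-version suffixes.
# deps.txt stands in for the output of: mvn dependency:tree
cat > deps.txt <<'EOF'
org.apache.spark:spark-core_2.11:jar:1.6.1
com.twitter:chill_2.11:jar:0.5.0
com.company.www.commons:commons-spark:jar:1.0.17
EOF
# Print each distinct suffix found; more than one line means mixed Scala versions
grep -oE '_2\.1[0-2]' deps.txt | sort -u
```

If this prints more than one suffix, some dependency was built against a different Scala version and must be aligned or excluded.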