2016-10-04

I want to load data from an Oracle table into a Cassandra table. I tried to follow the same steps described in the "Running the Sqoop demo" documentation on the DataStax site — https://docs.datastax.com/en/datastax_enterprise/4.5/datastax_enterprise/ana/anaSqpDemo.html — using the sqoop import tool to move data from Oracle to Cassandra.

Here I am using an Oracle database instead of MySQL, with DataStax Enterprise 5.0.2.

dse sqoop cql-import --connect 'jdbc:oracle:thin:username/password@//ip_address_of_the_host:PORT/SERVICE_NAME' --table ORACLE_TABLE_NAME --cassandra-keyspace npa_nxx --cassandra-table npa_nxx_data --cassandra-host IP_ADDRESS_CASSANDRA --cassandra-port 9042 --cassandra-column-mapping npa:npa,nxx:nxx,latitude:lat,longitude:lon,state:state,city:city 
Hadoop functionality is deprecated and may be removed in a future release. 
Note: /tmp/sqoop-xxxx/compile/4657cfc531e9676b9013e057157bf522/SSFS_STAGE02_NPA_NXX.java uses or overrides a deprecated API. 
Note: Recompile with -Xlint:deprecation for details. 
ERROR 13:45:08,987 Encountered IOException running import job: java.io.IOException: Failed to read the table metadata 
     at com.datastax.bdp.sqoop.SqoopUtil.setCqlImportOptions(SqoopUtil.java:186) 
     at com.datastax.bdp.sqoop.CqlImportJob.configureOutputFormat(CqlImportJob.java:120) 
     at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:240) 
     at com.datastax.bdp.sqoop.SqoopUtil.importTable(SqoopUtil.java:587) 
     at com.datastax.bdp.sqoop.SqlManagerAdapter.importTable(SqlManagerAdapter.java:222) 
     at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497) 
     at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601) 
     at org.apache.sqoop.Sqoop.run(Sqoop.java:143) 
     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65) 
     at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179) 
     at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218) 
     at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227) 
     at org.apache.sqoop.Sqoop.main(Sqoop.java:236) 
     at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57) 
Caused by: java.io.IOException: --cassandra-column-mapping contains an SQL column city that does not exist in the SQL table or query 
     at com.datastax.bdp.sqoop.SqoopUtil.validateColumnConsistency(SqoopUtil.java:312) 
     at com.datastax.bdp.sqoop.SqoopUtil.setCqlImportOptions(SqoopUtil.java:168) 
     ... 13 more 

I tested the JDBC connection to Oracle and it looks fine.

Please help me understand this issue; any suggestions are welcome.

Thanks, Raghav

Answers


The error

--cassandra-column-mapping contains an SQL column city that does not exist in the SQL table or query 

implies that your Oracle table was not set up correctly; you should double-check that it has the right schema.
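For intuition, here is a small standalone sketch (plain Python, not DSE code) of the kind of case-sensitive check that `SqoopUtil.validateColumnConsistency` appears to perform on the `--cassandra-column-mapping` value; the function name and column set below are illustrative, not taken from the DSE source:

```python
def validate_mapping(mapping: str, sql_columns: set) -> list:
    """Return the SQL-side names from a --cassandra-column-mapping
    string (cql_col:sql_col pairs) missing from the SQL table.
    The comparison is case-sensitive."""
    missing = []
    for pair in mapping.split(","):
        cql_col, sql_col = pair.split(":")
        if sql_col not in sql_columns:  # exact-case comparison
            missing.append(sql_col)
    return missing

# Oracle folds unquoted identifiers to upper case, so the table
# exposes CITY, not city.
oracle_columns = {"NPA", "NXX", "LAT", "LON", "STATE", "CITY"}

bad = validate_mapping(
    "npa:npa,nxx:nxx,latitude:lat,longitude:lon,state:state,city:city",
    oracle_columns)
print(bad)  # every lower-case SQL name fails the check

good = validate_mapping(
    "npa:NPA,nxx:NXX,latitude:LAT,longitude:LON,state:STATE,city:CITY",
    oracle_columns)
print(good)  # []
```

Under that model, the original mapping fails exactly as reported: `city` does not match the upper-case `CITY` that Oracle stores for an unquoted column name.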


Thanks RussS for the reply. I created the Oracle table using SQL Developer, and I can see all the columns in both the Oracle and Cassandra tables. – rraghav84


It doesn't look like they match, and I believe they are case-sensitive. – RussS


Thanks RussS. Yes, they are case-sensitive. I changed the column mapping, and now I am facing a different issue: – rraghav84


Here is the DDL of the Oracle table:

CREATE TABLE "SCHEMA_NAME"."NPA_NXX" 
    ( "NPA_NXX_KEY" NUMBER(*,0) NOT NULL ENABLE, 
    "NPA" NUMBER(*,0), 
    "NXX" NUMBER(*,0), 
    "LAT" FLOAT(126), 
    "LON" FLOAT(126), 
    "LINETYPE" CHAR(1 BYTE), 
    "STATE" VARCHAR2(2 BYTE), 
    "CITY" VARCHAR2(36 BYTE), 
    CONSTRAINT "NPA_NXX_PK" PRIMARY KEY ("NPA_NXX_KEY") 
    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 
    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) 
    TABLESPACE "XXXXXX_DATA" ENABLE 
    ) SEGMENT CREATION IMMEDIATE 
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
NOCOMPRESS LOGGING 
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 
    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) 
    TABLESPACE "XXXXX_DATA" ; 

Cassandra keyspace and table:

cqlsh> CREATE KEYSPACE npa_nxx WITH REPLICATION = 
     {'class':'NetworkTopologyStrategy', 'Analytics':1}; 

cqlsh> CREATE TABLE npa_nxx.npa_nxx_data (npa int, nxx int, 
     latitude float, longitude float, state text, city text, 
     PRIMARY KEY(npa, nxx)); 
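Given the two schemas above, the corrected mapping can be derived mechanically. A small sketch (plain Python; the CQL-to-Oracle pairing is read off the two table definitions above) that builds the `--cassandra-column-mapping` value using the upper-case Oracle names:

```python
# CQL column -> Oracle column, read from the two table definitions above.
pairs = {
    "npa": "NPA",
    "nxx": "NXX",
    "latitude": "LAT",
    "longitude": "LON",
    "state": "STATE",
    "city": "CITY",
}

# Each entry becomes a cql_col:sql_col pair, joined with commas.
mapping = ",".join(f"{cql}:{sql}" for cql, sql in pairs.items())
print(mapping)
# npa:NPA,nxx:NXX,latitude:LAT,longitude:LON,state:STATE,city:CITY
```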
The updated command with the case-sensitive column mapping, and its output:
dse sqoop cql-import --connect 'jdbc:oracle:thin:username/password@//ip_address_of_the_host:PORT/SERVICE_NAME' --table ORACLE_TABLE_NAME --cassandra-keyspace npa_nxx --cassandra-table npa_nxx_data --cassandra-host IP_ADDRESS_CASSANDRA --cassandra-port 9042 --cassandra-column-mapping npa:NPA,nxx:NXX,latitude:LAT,longitude:LON,state:STATE,city:CITY 


Hadoop functionality is deprecated and may be removed in a future release. 
Note: /tmp/sqoop-tmhmadm/compile/06a746beb5d4af8ac13b60568fedbcbd/SSFS_STAGE02_NPA_NXX.java uses or overrides a deprecated API. 
Note: Recompile with -Xlint:deprecation for details. 
WARN 11:23:58,503 Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
java.lang.Throwable: Child Error 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271) 
Caused by: java.io.IOException: Task process exit with nonzero status of -1. 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258) 

java.lang.Throwable: Child Error 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271) 
Caused by: java.io.IOException: Task process exit with nonzero status of -1. 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258) 

java.lang.Throwable: Child Error 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271) 
Caused by: java.io.IOException: Task process exit with nonzero status of -1. 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258) 

java.lang.Throwable: Child Error 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271) 
Caused by: java.io.IOException: Task process exit with nonzero status of -1. 
     at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258) 

attempt_201610061121_0001_m_000001_0: 11:24:15,977 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy] 
attempt_201610061121_0001_m_000001_0: 11:24:15,978 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml] 
attempt_201610061121_0001_m_000001_0: 11:24:15,979 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/etc/dse/cassandra/logback.xml] 
attempt_201610061121_0001_m_000001_0: 11:24:16,412 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug attribute not set 
attempt_201610061121_0001_m_000001_0: 11:24:16,418 |-INFO in ReconfigureOnChangeFilter{invocationCounter=0} - Will scan for changes in [[/etc/dse/cassandra/logback.xml]] every 60 seconds. 
attempt_201610061121_0001_m_000001_0: 11:24:16,418 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - Adding ReconfigureOnChangeFilter as a turbo filter 
attempt_201610061121_0001_m_000001_0: 11:24:16,432 |-INFO in ch.qos.logback.classic.joran.action.JMXConfiguratorAction - begin 
attempt_201610061121_0001_m_000001_0: 11:24:16,617 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender] 
attempt_201610061121_0001_m_000001_0: 11:24:16,624 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [SYSTEMLOG] 
attempt_201610061121_0001_m_000001_0: 11:24:17,683 |-INFO in [email protected] - Will use zip compression 
attempt_201610061121_0001_m_000001_0: 11:24:17,835 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property 
attempt_201610061121_0001_m_000001_0: 11:24:17,919 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[SYSTEMLOG] - Active log file name: cassandra.logdir_IS_UNDEFINED/system.log 
attempt_201610061121_0001_m_000001_0: 11:24:17,919 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[SYSTEMLOG] - File property is set to [cassandra.logdir_IS_UNDEFINED/system.log] 
attempt_201610061121_0001_m_000001_0: 11:24:17,921 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender] 
attempt_201610061121_0001_m_000001_0: 11:24:17,921 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [DEBUGLOG] 
...

attempt_201610061121_0001_m_000002_2: 11:24:50,116 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.thinkaurelius.thrift] to ERROR 
attempt_201610061121_0001_m_000002_2: 11:24:50,116 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.rolling.RollingFileAppender] 
attempt_201610061121_0001_m_000002_2: 11:24:50,116 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [SolrValidationErrorAppender] 
attempt_201610061121_0001_m_000002_2: 11:24:50,117 |-INFO in [email protected] - Will use zip compression 
attempt_201610061121_0001_m_000002_2: 11:24:50,117 |-WARN in [email protected] - Large window sizes are not allowed. 
attempt_201610061121_0001_m_000002_2: 11:24:50,118 |-WARN in [email protected] - MaxIndex reduced to 21 
attempt_201610061121_0001_m_000002_2: 11:24:50,118 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[SolrValidationErrorAppender] - Active log file name: cassandra.logdir_IS_UNDEFINED/solrvalidation.log 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.core.rolling.RollingFileAppender[SolrValidationErrorAppender] - File property is set to [cassandra.logdir_IS_UNDEFINED/solrvalidation.log] 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [SolrValidationErrorLogger] to ERROR 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting additivity of logger [SolrValidationErrorLogger] to false 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [SolrValidationErrorAppender] to Logger[SolrValidationErrorLogger] 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.datastax.bdp.search.solr.metrics.MetricsWriteEventListener] to DEBUG 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.solr.core.CassandraSolrConfig] to WARN 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.solr.core.SolrCore] to WARN 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.solr.core.RequestHandlers] to WARN 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.solr.handler.component] to WARN 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.solr.search.SolrIndexSearcher] to WARN 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.solr.update] to WARN 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [org.apache.lucene.index] to INFO 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [com.cryptsoft] to OFF 
attempt_201610061121_0001_m_000002_2: 11:24:50,120 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration. 
attempt_201610061121_0001_m_000002_2: 11:24:50,121 |-INFO in [email protected] - Registering current configuration as safe fallback point 
attempt_201610061121_0001_m_000002_2: INFO 11:24:51,231 NativeCodeLoader.java:43 - Loaded the native-hadoop library 
ERROR 11:25:28,355 Error during import: Import job failed!