I am using Azure Data Factory to periodically import data from MySQL into Azure SQL Data Warehouse. How can I incrementally import data from MySQL into the data warehouse with Azure Data Factory?
The data passes through a staging blob on an Azure storage account, but when I run the pipeline it fails because it cannot split the blob text back into columns. Every row the pipeline tries to insert into the destination becomes one long string containing all of the column values, separated by "⯑" characters.
I have used Data Factory before without the incremental mechanism, and it worked fine. I don't see anything that would cause this behavior, but I may have missed something.
I am attaching the JSON that describes the pipeline, with some minor naming changes; please let me know if you see anything that could explain this.
Thanks!
EDIT: Adding the exception message:
Failed execution Database operation failed. Error message from database execution : ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed. (/f4ae80d1-4560-4af9-9e74-05de941725ac/Data.8665812f-fba1-407a-9e04-2ee5f3ca5a7e.txt) Column ordinal: 27, Expected data type: VARCHAR(45) collate SQL_Latin1_General_CP1_CI_AS, Offending value:* ROW OF VALUES * (Tokenization failed), Error: Not enough columns in this line.,},],'.
{
    "name": "CopyPipeline-move_incremental_test",
    "properties": {
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "RelationalSource",
                        "query": "$$Text.Format('select * from [table] where InsertTime >= \\'{0:yyyy-MM-dd HH:mm}\\' AND InsertTime < \\'{1:yyyy-MM-dd HH:mm}\\'', WindowStart, WindowEnd)"
                    },
                    "sink": {
                        "type": "SqlDWSink",
                        "sqlWriterCleanupScript": "$$Text.Format('delete [schema].[table] where [InsertTime] >= \\'{0:yyyy-MM-dd HH:mm}\\' AND [InsertTime] <\\'{1:yyyy-MM-dd HH:mm}\\'', WindowStart, WindowEnd)",
                        "allowPolyBase": true,
                        "polyBaseSettings": {
                            "rejectType": "Value",
                            "rejectValue": 0,
                            "useTypeDefault": true
                        },
                        "writeBatchSize": 0,
                        "writeBatchTimeout": "00:00:00"
                    },
                    "translator": {
                        "type": "TabularTranslator",
                        "columnMappings": "column1:column1,column2:column2,column3:column3"
                    },
                    "enableStaging": true,
                    "stagingSettings": {
                        "linkedServiceName": "StagingStorage-somename",
                        "path": "somepath"
                    }
                },
                "inputs": [
                    {
                        "name": "InputDataset-input"
                    }
                ],
                "outputs": [
                    {
                        "name": "OutputDataset-output"
                    }
                ],
                "policy": {
                    "timeout": "1.00:00:00",
                    "concurrency": 10,
                    "style": "StartOfInterval",
                    "retry": 3,
                    "longRetry": 0,
                    "longRetryInterval": "00:00:00"
                },
                "scheduler": {
                    "frequency": "Hour",
                    "interval": 1
                },
                "name": "Activity-0-_Custom query_->[schema]_[table]"
            }
        ],
        "start": "2017-06-01T05:29:12.567Z",
        "end": "2099-12-30T22:00:00Z",
        "isPaused": false,
        "hubName": "datafactory_hub",
        "pipelineMode": "Scheduled"
    }
}
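The exception above says the load aborted because the maximum reject threshold (0 rows) was reached, so a single malformed row in the staged blob kills the whole load. As a diagnostic step (not a fix for the underlying tokenization problem), the `polyBaseSettings` block can be loosened so that a small percentage of bad rows is tolerated and the remaining rows load, which makes it easier to see how many rows are actually affected. A hedged sketch, assuming the ADF v1 percentage-based reject schema; the threshold values here are illustrative:

```json
{
    "polyBaseSettings": {
        "rejectType": "percentage",
        "rejectValue": 10.0,
        "rejectSampleValue": 100,
        "useTypeDefault": true
    }
}
```

If most rows then load successfully, the failure is likely confined to rows whose text values break the staged file's column structure, rather than a wholesale delimiter mismatch.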
Can you provide more of the steps and the exception? –
Failed execution Database operation failed. Error message from database execution : ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error happened when loading data into SQL Data Warehouse.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Data.SqlClient.SqlException,Message=Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed. (/f4ae80d1-4560-4af9-9e74-05de941725ac/Data.8665812f-fba1-407a-9e04-2ee5f3ca5a7e.txt) – PandaZ
Column ordinal: 27, Expected data type: VARCHAR(45) collate SQL_Latin1_General_CP1_CI_AS, Offending value:* ROW OF VALUES * (Tokenization failed), Error: Not enough columns in this line.,},],'. – PandaZ
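The "Not enough columns in this line" failure at column ordinal 27 often means some free-text value contains an embedded newline or the staging file's column delimiter, so PolyBase sees fewer columns in that line than the external table expects. One possible workaround is to sanitize such columns in the source query itself, before the data ever reaches the staging blob. A hedged sketch of the `source` section, assuming MySQL's `REPLACE` and `CHAR` functions; the column names `col1`, `col2`, and `free_text_col` are placeholders, not names from the original pipeline:

```json
{
    "source": {
        "type": "RelationalSource",
        "query": "$$Text.Format('select col1, col2, REPLACE(REPLACE(free_text_col, CHAR(13), \\' \\'), CHAR(10), \\' \\') as free_text_col from [table] where InsertTime >= \\'{0:yyyy-MM-dd HH:mm}\\' AND InsertTime < \\'{1:yyyy-MM-dd HH:mm}\\'', WindowStart, WindowEnd)"
    }
}
```

Here `CHAR(13)` and `CHAR(10)` strip carriage returns and line feeds; if the offending character is the staging delimiter itself, an additional `REPLACE` for that character would be needed.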