
I'm wondering whether it's possible to have an optional array in an Avro schema. Let's say the schema looks like this:

{
    "type": "record",
    "name": "test_avro",
    "fields": [
        {"name": "test_field_1", "type": "long"},
        {"name": "subrecord", "type": [
            {
                "type": "record",
                "name": "subrecord_type",
                "fields": [{"name": "field_1", "type": "long"}]
            },
            "null"
        ]},
        {"name": "simple_array", "type": {
            "type": "array",
            "items": "string"
        }}
    ]
}

Trying to write an Avro record without "simple_array" results in an NPE in the DataFileWriter. For the subrecord it works just fine, but when I try to declare the array as optional:

{"name": "simple_array", 
"type":[{ 
    "type": "array", 
    "items": "string" 
    }, "null"] 

it no longer throws an NPE, but a runtime exception instead:

AvroRuntimeException: Not an array schema: [{"type":"array","items":"string"},"null"] 

Thanks.

Answer


I think what you want here is a union of null and the array:

{ 
    "type":"record", 
    "name":"test_avro", 
    "fields":[{ 
      "name":"test_field_1", 
      "type":"long" 
     }, 
     { 
      "name":"subrecord", 
      "type":[{ 
        "type":"record", 
        "name":"subrecord_type", 
        "fields":[{ 
          "name":"field_1", 
          "type":"long" 
         } 
        ] 
       }, 
       "null" 
      ] 
     }, 
     { 
      "name":"simple_array", 
      "type":["null", 
       { 
        "type":"array", 
        "items":"string" 
       } 
      ], 
      "default":null 
     } 
    ] 
} 

When I use the schema above with sample data in Python, here's the result (schema_string is the JSON string above):

>>> from avro import io, datafile, schema 
>>> from json import dumps 
>>> 
>>> sample_data = {'test_field_1':12L} 
>>> rec_schema = schema.parse(schema_string) 
>>> rec_writer = io.DatumWriter(rec_schema) 
>>> rec_reader = io.DatumReader() 
>>> 
>>> # write avro file 
... df_writer = datafile.DataFileWriter(open("/tmp/foo", 'wb'), rec_writer, writers_schema=rec_schema) 
>>> df_writer.append(sample_data) 
>>> df_writer.close() 
>>> 
>>> # read avro file 
... df_reader = datafile.DataFileReader(open('/tmp/foo', 'rb'), rec_reader) 
>>> print dumps(df_reader.next()) 
{"simple_array": null, "test_field_1": 12, "subrecord": null} 

Had the same problem with a Java list; your answer solved it for me. Thanks! – forhas 2013-10-21 14:03:46


I get the same error. In my setup I'm processing Avro files with a MapReduce Java program, and that job succeeds. The next stage of the data pipeline creates a Hive table (AvroSerDe) on top of the transformed data. The table is also created successfully, but when I try to query it with HQL (which in turn runs a MapReduce job), the job fails with "Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing writable" – venBigData 2016-04-25 23:16:24