2016-03-03

DataNucleus JPA named query returns a deleted entity

I am using DataNucleus to perform CRUD. I remove an entity and then execute a named query. Why is the entity that was just deleted still in the result list?

First, the entity is removed:

MyEntity e = manager.find(MyEntity.class, id); 
manager.remove(e); 

Then, the query is executed:

@NamedQueries({ 
    @NamedQuery(name = MyEntity.FIND_ALL, query = "SELECT a FROM MyEntity a ORDER BY a.updated DESC") 
}) 
public class MyEntity extends BaseEntity { // the annotation is declared on the entity class
    public static final String FIND_ALL = "MyEntity.findAll"; 
    // ...
} 
TypedQuery<MyEntity> query = manager.createNamedQuery(FIND_ALL, MyEntity.class); 
return query.getResultList(); 

The persistence.xml sets datanucleus.Optimistic:

<property name="datanucleus.Optimistic" value="true" /> 

The named query unexpectedly returns a result list that contains the deleted entity. With datanucleus.Optimistic=false the result is correct. Why does it not work with datanucleus.Optimistic=true?

More details about this case:

Here are the relevant CRUD logs:

1. Log of the save operation:

DEBUG: DataNucleus.Transaction - Transaction begun for ExecutionContext [email protected] (optimistic=true) 
INFO : org.springframework.test.context.transaction.TransactionalTestExecutionListener - Began transaction (1): transaction manager [[email protected]]; rollback [true] 
DEBUG: DataNucleus.Persistence - Making object persistent : "[email protected]" 
DEBUG: DataNucleus.Cache - Object with id "com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" not found in Level 1 cache [cache size = 0] 
DEBUG: DataNucleus.Cache - Object with id "com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" not found in Level 2 cache 
DEBUG: DataNucleus.Persistence - Managing Persistence of Class : com.demo.MyEntity [Table : (none), InheritanceStrategy : superclass-table] 
DEBUG: DataNucleus.Cache - Object "[email protected]a65f" (id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24") added to Level 1 cache (loadedFlags="[YNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN]") 
DEBUG: DataNucleus.Lifecycle - Object "[email protected]" (id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24") has a lifecycle change : "HOLLOW"->"P_NONTRANS" 
DEBUG: DataNucleus.Persistence - Fetching object "[email protected]" (id=07cad778-d1c3-4834-ace7-ac2e4ecacc24) fields [entityId,extensions,objectType,openSocial,published,updated,url,actor,appId,bcc,bto,cc,content,context,dc,endTime,generator,geojson,groupId,icon,inReplyTo,ld,links,location,mood,object,odata,opengraph,priority,provider,rating,result,schema_org,source,startTime,tags,target,title,to,userId,verb] 
DEBUG: DataNucleus.Datastore.Retrieve - Object "[email protected]" (id="07cad778-d1c3-4834-ace7-ac2e4ecacc24") being retrieved from HBase 
DEBUG: org.apache.hadoop.hbase.zookeeper.ZKUtil - hconnection opening connection to ZooKeeper with ensemble (master.hbase.com:2181) 

.... 
DEBUG: org.apache.hadoop.hbase.client.MetaScanner - Scanning .META. starting at row=MyEntity,,00000000000000 for max=10 rows using org.apache.h[email protected]25c7f5b0 
... 
DEBUG: DataNucleus.Cache - Object with id="com.demo.MyEntity:07cad778-d1c3-4834-ace7-ac2e4ecacc24" being removed from Level 1 cache [current cache size = 1] 
DEBUG: DataNucleus.ValueGeneration - Creating ValueGenerator instance of "org.datanucleus.store.valuegenerator.UUIDGenerator" for "uuid" 
DEBUG: DataNucleus.ValueGeneration - Reserved a block of 1 values 
DEBUG: DataNucleus.ValueGeneration - Generated value for field "com.demo.BaseEntity.entityId" using strategy="custom" (Generator="org.datanucleus.store.valuegenerator.UUIDGenerator") : value=4aa3c4a8-b450-473e-aeba-943dc6ef30ce 
DEBUG: DataNucleus.Cache - Object "[email protected]" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") added to Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]") 
DEBUG: DataNucleus.Transaction - Object "[email protected]" (id="4aa3c4a8-b450-473e-aeba-943dc6ef30ce") enlisted in transactional cache 
DEBUG: DataNucleus.Persistence - Object "[email protected]" has been marked for persistence but its actual persistence to the datastore will be delayed due to use of optimistic transactions or "datanucleus.flush.mode" setting 

2. Log of the DELETE operation:

DEBUG: DataNucleus.Cache - Object "[email protected]" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]") [cache size = 1] 
DEBUG: DataNucleus.Persistence - Deleting object from persistence : "[email protected]" 
DEBUG: DataNucleus.Lifecycle - Object "[email protected]" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") has a lifecycle change : "P_NEW"->"P_NEW_DELETED" 

3. Log of the named-query operation:

DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.query.cache.SoftQueryCompilationCache" initialised 
DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.store.query.cache.SoftQueryDatastoreCompilationCache" initialised 
DEBUG: DataNucleus.Cache - Query Cache of type "org.datanucleus.store.query.cache.SoftQueryResultsCache" initialised 
DEBUG: DataNucleus.Query - JPQL Single-String with "SELECT a FROM MyEntity a ORDER BY a.updated DESC" 
DEBUG: DataNucleus.Persistence - ExecutionContext.internalFlush() process started using optimised flush - 0 to delete, 1 to insert and 0 to update 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #7 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #7 
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: exists 0 
DEBUG: DataNucleus.Datastore.Persist - Object "[email protected]" being inserted into HBase with all reachable objects 
DEBUG: DataNucleus.Datastore.Native - Object "[email protected]" PUT into HBase table "MyEntity" as {"totalColumns":3,"families":{"MyEntity":[{"timestamp":9223372036854775807,"qualifier":"DTYPE","vlen":8},{"timestamp":9223372036854775807,"qualifier":"userId","vlen":5},{"timestamp":9223372036854775807,"qualifier":"entityId","vlen":36}]},"row":"4aa3c4a8-b450-473e-aeba-943dc6ef30ce"} 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #8 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #8 
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: multi 2 
DEBUG: DataNucleus.Datastore.Persist - Execution Time = 123 ms 
DEBUG: DataNucleus.Persistence - ExecutionContext.internalFlush() process finished 
DEBUG: DataNucleus.Query - JPQL Query : Compiling "SELECT a FROM MyEntity a ORDER BY a.updated DESC" 
DEBUG: DataNucleus.Query - JPQL Query : Compile Time = 13 ms 
DEBUG: DataNucleus.Query - QueryCompilation: 
    [from:ClassExpression(alias=a)] 
    [ordering:OrderExpression{PrimaryExpression{a.updated} descending}] 
    [symbols: a type=com.demo.MyEntity] 
DEBUG: DataNucleus.Query - JPQL Query : Compiling "SELECT a FROM MyEntity a ORDER BY a.updated DESC" for datastore 
DEBUG: DataNucleus.Query - JPQL Query : Compile Time for datastore = 2 ms 
DEBUG: DataNucleus.Query - JPQL Query : Executing "SELECT a FROM MyEntity a ORDER BY a.updated DESC" ... 
DEBUG: DataNucleus.Datastore.Native - Retrieving objects for candidate=com.demo.MyEntity and subclasses 
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Creating scanner over MyEntity starting at key '' 
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Advancing internal scanner to startKey at '' 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #9 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #9 
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: openScanner 1 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #10 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #10 
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: next 0 
DEBUG: DataNucleus.Cache - Object "[email protected]" (id="com.demo.MyEntity:4aa3c4a8-b450-473e-aeba-943dc6ef30ce") taken from Level 1 cache (loadedFlags="[YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY]") [cache size = 1] 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #11 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #11 
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: next 0 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 sending #12 
DEBUG: org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to namenode.hbase.com/192.168.1.99:60020 from user1 got value #12 
DEBUG: org.apache.hadoop.ipc.RPCEngine - Call: close 1 
DEBUG: org.apache.hadoop.hbase.client.ClientScanner - Finished with scanning at {NAME => 'MyEntity,,1457106265917.c6437b9afd33cd225c33e0ed52ff50d4.', STARTKEY => '', ENDKEY => '', ENCODED => c6437b9afd33cd225c33e0ed52ff50d4,} 
DEBUG: DataNucleus.Query - JPQL Query : Processing the "ordering" clause using in-memory evaluation (clause = "[OrderExpression{PrimaryExpression{a.updated} descending}]") 
DEBUG: DataNucleus.Query - JPQL Query : Processing the "resultClass" clause using in-memory evaluation (clause = "com.demo.MyEntity") 
DEBUG: DataNucleus.Query - JPQL Query : Execution Time = 14 ms 

Why do the following log lines (the entity with lifecycle state "P_NEW_DELETED" being PUT into the datastore) appear during the query operation, and how can this behavior be avoided?

DEBUG: DataNucleus.Datastore.Persist - Object "[email protected]" being inserted into HBase with all reachable objects 
DEBUG: DataNucleus.Datastore.Native - Object "[email protected]" PUT into HBase table "MyEntity" as {"totalColumns":3,"families":{"MyEntity":[{"timestamp":9223372036854775807,"qualifier":"DTYPE","vlen":8},{"timestamp":9223372036854775807,"qualifier":"userId","vlen":5},{"timestamp":9223372036854775807,"qualifier":"entityId","vlen":36}]},"row":"4aa3c4a8-b450-473e-aeba-943dc6ef30ce"} 

And the problem is? –


Creating a "NAMED" query implies you have defined a named query in some file. Have you? And if so, where? –


Yes, I have updated my post. Thanks! – Michael

Answer


You have optimistic transactions enabled, so all datastore write operations happen only at commit. You performed the remove before executing the query (and no flush mode was set for the query), so at the point the query executes, your delete has not yet reached the datastore.

Call

em.flush() 

before executing the query, or set the query's flush mode:

query.setFlushMode(FlushModeType.AUTO);
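The underlying behavior is that, under optimistic transactions, writes are queued and only applied to the datastore on flush or commit, while a query scans the datastore itself. The following toy model (plain Java collections, not the DataNucleus API) sketches why an unflushed remove() stays visible to a query:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of optimistic-transaction write buffering (NOT DataNucleus itself):
// writes are queued and only reach the "datastore" on flush, so a
// datastore-backed query cannot observe an unflushed remove().
public class FlushDemo {
    private final Map<String, String> datastore = new HashMap<>();
    private final List<Runnable> pendingWrites = new ArrayList<>();

    void persist(String id, String value) {
        pendingWrites.add(() -> datastore.put(id, value)); // queued, not applied
    }

    void remove(String id) {
        pendingWrites.add(() -> datastore.remove(id));     // queued, not applied
    }

    void flush() {
        pendingWrites.forEach(Runnable::run);              // apply queued writes
        pendingWrites.clear();
    }

    List<String> queryAll() {
        // Queries scan the datastore, not the in-memory write queue.
        return new ArrayList<>(datastore.keySet());
    }

    public static void main(String[] args) {
        FlushDemo em = new FlushDemo();
        em.persist("e1", "MyEntity");
        em.flush();                          // entity is now in the datastore

        em.remove("e1");                     // queued only
        System.out.println(em.queryAll());   // prints [e1] -- delete not visible yet

        em.flush();                          // the fix: flush before querying
        System.out.println(em.queryAll());   // prints []  -- query now correct
    }
}
```

In real JPA terms, `em.flush()` or `FlushModeType.AUTO` plays the role of the `flush()` call above: it pushes the queued delete to the datastore before the query scan runs.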