2014-03-04

Task definition: I need to test a custom concurrent collection, or a container that operates on collections in a concurrent environment. More precisely, it has a read API and a write API, and I need to test whether there is any scenario that can lead to data inconsistency. In short: concurrency testing, and automating the test-case scenarios.

Problem: all the concurrency-testing frameworks I have found (e.g. MultithreadedTC; see the MultithreadedTC section of my question) only give you the ability to control the order in which asynchronous code executes. In other words, you still have to come up with the critical scenarios yourself.

Broad question: is there a framework that works from annotations like @SharedResource, @readAPI and @writeAPI and checks whether your data always stays consistent? Is that impossible, or have I just leaked a startup idea? (A rough sketch of what I have in mind follows the note below.)

Note: if no such framework exists, but you find the idea appealing, feel free to contact me or share your thoughts.
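
To make the idea concrete, here is a rough, purely hypothetical sketch of what such an annotation-driven framework could look like. None of this exists today; the names are invented, and the processing step is only described in a comment:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker annotations: the imagined framework would scan a class
// such as PeersContainer, collect the @SharedResource fields and the
// @ReadAPI/@WriteAPI methods, generate interleavings of the annotated methods
// across several threads, and assert after each interleaving that the shared
// state is still consistent.
@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
@interface SharedResource {}

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface ReadAPI {}

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface WriteAPI {}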

Narrowed question: I am new to concurrency, so could you suggest which scenarios I should test in the code below? (See the PeersContainer class.)

PeersContainer:

import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalCause;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;
import com.google.common.util.concurrent.Striped;

public class PeersContainer { 

    public class DaemonThreadFactory implements ThreadFactory { 

     private int counter = 1; 
     private final String prefix = "Daemon"; 

     @Override 
     public Thread newThread(Runnable r) { 
      Thread thread = new Thread(r, prefix + "-" + counter); 
      thread.setDaemon(true); 
      counter++; 
      return thread; 
     } 
    } 

    private static class CacheCleaner implements Runnable { 

     private final Cache<Long, BlockingDeque<Peer>> cache; 

     public CacheCleaner(Cache<Long, BlockingDeque<Peer>> cache) { 
      this.cache = cache; 
      // note: calling Thread.currentThread().setDaemon(true) here would throw 
      // IllegalThreadStateException (the constructing thread is already alive); 
      // the DaemonThreadFactory used by the scheduler already creates daemon threads 
     } 

     @Override 
     public void run() { 
      cache.cleanUp(); 
     } 
    } 

    private final static int MAX_CACHE_SIZE = 100; 
    private final static int STRIPES_AMOUNT = 10; 
    private final static int PEER_ACCESS_TIMEOUT_MIN = 30; 
    private final static int CACHE_CLEAN_FREQUENCY_MIN = 1; 

    private final static PeersContainer INSTANCE; 

    private final Cache<Long, BlockingDeque<Peer>> peers = CacheBuilder.newBuilder() 
      .maximumSize(MAX_CACHE_SIZE) 
      .expireAfterWrite(PEER_ACCESS_TIMEOUT_MIN, TimeUnit.MINUTES) 
      .removalListener(new RemovalListener<Long, BlockingDeque<Peer>>() { 
       public void onRemoval(RemovalNotification<Long, BlockingDeque<Peer>> removal) { 
        if (removal.getCause() == RemovalCause.EXPIRED) { 
         for (Peer peer : removal.getValue()) { 
          peer.sendLogoutResponse(peer); 
         } 
        } 
       } 
      }) 
      .build(); 
    private final Striped<Lock> stripes = Striped.lock(STRIPES_AMOUNT); 
    private final ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1, new DaemonThreadFactory()); 

    private PeersContainer() { 
     // CACHE_CLEAN_FREQUENCY_MIN implies a periodic clean-up, so use scheduleAtFixedRate 
     // rather than the one-shot schedule() 
     scheduledExecutorService.scheduleAtFixedRate(new CacheCleaner(peers), CACHE_CLEAN_FREQUENCY_MIN, CACHE_CLEAN_FREQUENCY_MIN, TimeUnit.MINUTES); 
    } 

    static { 
     INSTANCE = new PeersContainer(); 
    } 

    public static PeersContainer getInstance() { 
     return INSTANCE; 
    } 

    private final Cache<Long, UserAuthorities> authToRestore = CacheBuilder.newBuilder() 
      .maximumSize(MAX_CACHE_SIZE) 
      .expireAfterWrite(PEER_ACCESS_TIMEOUT_MIN, TimeUnit.MINUTES) 
      .build(); 

    public Collection<Peer> getPeers(long sessionId) { 
     return Collections.unmodifiableCollection(peers.getIfPresent(sessionId)); 
    } 

    public Collection<Peer> getAllPeers() { 
     BlockingDeque<Peer> result = new LinkedBlockingDeque<Peer>(); 
     for (BlockingDeque<Peer> deque : peers.asMap().values()) { 
      result.addAll(deque); 
     } 
     return Collections.unmodifiableCollection(result); 
    } 

    public boolean addPeer(Peer peer) { 
     long key = peer.getSessionId(); 
     Lock lock = stripes.get(key); 
     lock.lock(); 
     try { 
      BlockingDeque<Peer> userPeers = peers.getIfPresent(key); 
      if (userPeers == null) { 
       userPeers = new LinkedBlockingDeque<Peer>(); 
       peers.put(key, userPeers); 
      } 
      UserAuthorities authorities = restoreSession(key); 
      if (authorities != null) { 
       peer.setAuthorities(authorities); 
      } 
      return userPeers.offer(peer); 
     } finally { 
      lock.unlock(); 
     } 
    } 

    public void removePeer(Peer peer) { 
     long sessionId = peer.getSessionId(); 
     Lock lock = stripes.get(sessionId); 
     lock.lock(); 
     try { 
      BlockingDeque<Peer> userPeers = peers.getIfPresent(sessionId); 
      if (userPeers != null && !userPeers.isEmpty()) { 
       UserAuthorities authorities = userPeers.getFirst().getAuthorities(); 
       authToRestore.put(sessionId, authorities); 
       userPeers.remove(peer); 
      } 
     } finally { 
      lock.unlock(); 
     } 
    } 

    void removePeers(long sessionId) { 
     Lock lock = stripes.get(sessionId); 
     lock.lock(); 
     try { 
      peers.invalidate(sessionId); 
      authToRestore.invalidate(sessionId); 
     } finally { 
      lock.unlock(); 
     } 
    } 

    private UserAuthorities restoreSession(long sessionId) { 
     BlockingDeque<Peer> activePeers = peers.getIfPresent(sessionId); 
     return (activePeers != null && !activePeers.isEmpty()) ? activePeers.getFirst().getAuthorities() : authToRestore.getIfPresent(sessionId); 
    } 

    public void resetAccessedTimeout(long sessionId) { 
     Lock lock = stripes.get(sessionId); 
     lock.lock(); 
     try { 
      BlockingDeque<Peer> deque = peers.getIfPresent(sessionId); 
      peers.invalidate(sessionId); 
      peers.put(sessionId, deque); 
     } finally { 
      lock.unlock(); 
     } 
    } 
} 
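
For a first, brute-force scenario, the sketch below hammers addPeer(), removePeers() and getPeers() on the same session id from several threads. It is only a sketch: Peer's constructor is not shown above, so the peer is stubbed with Mockito, and any exception printed by a worker counts as a finding. One such finding with the code as posted is the NullPointerException that getPeers() can throw once the session has been invalidated, because getIfPresent() returns null.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.mockito.Mockito;

public class PeersContainerStressTest {

    public static void main(String[] args) throws InterruptedException {
        final PeersContainer container = PeersContainer.getInstance();
        final long sessionId = 1L;

        // Peer's constructor is not shown in the question, so stub it with Mockito
        final Peer peer = Mockito.mock(Peer.class);
        Mockito.when(peer.getSessionId()).thenReturn(sessionId);

        final int threads = 12;
        final CountDownLatch start = new CountDownLatch(1);
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        for (int i = 0; i < threads; i++) {
            final int role = i % 3;
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        start.await(); // line all workers up for a simultaneous start
                        for (int j = 0; j < 10000; j++) {
                            if (role == 0) {
                                container.addPeer(peer);
                            } else if (role == 1) {
                                container.removePeers(sessionId);
                            } else {
                                // with the code as posted this can throw
                                // NullPointerException when the session is absent
                                container.getPeers(sessionId);
                            }
                        }
                    } catch (Exception e) {
                        e.printStackTrace(); // any exception here is a finding
                    }
                }
            });
        }

        start.countDown();
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.MINUTES);
    }
}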

MultithreadedTC test case sample: [optional part of the question]

import java.util.concurrent.LinkedTransferQueue; 

import edu.umd.cs.mtc.MultithreadedTestCase; 

public class ProducerConsumerTest extends MultithreadedTestCase { 
    private LinkedTransferQueue<String> queue; 

    @Override 
    public void initialize() { 
     super.initialize(); 
     queue = new LinkedTransferQueue<String>(); 
    } 

    public void thread1() throws InterruptedException { 
     String ret = queue.take(); 
    } 

    public void thread2() throws InterruptedException { 
     waitForTick(1); 
     String ret = queue.take(); 
    } 

    public void thread3() { 
     waitForTick(1); 
     waitForTick(2); 
     queue.put("Event 1"); 
     queue.put("Event 2"); 
    } 

    @Override 
    public void finish() { 
     super.finish(); 
     assertTrue(queue.isEmpty()); 
    } 
} 
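
The same MultithreadedTC style can be pointed at PeersContainer. The sketch below (again with Peer stubbed via Mockito, and assuming the standard edu.umd.cs.mtc classes) forces the deterministic interleaving add, then invalidate, then read, which with the code as posted ends in a NullPointerException inside getPeers():

import java.util.Collection;

import org.mockito.Mockito;

import edu.umd.cs.mtc.MultithreadedTestCase;
import edu.umd.cs.mtc.TestFramework;

public class PeersContainerInterleavingTest extends MultithreadedTestCase {

    private PeersContainer container;
    private Peer peer;

    @Override
    public void initialize() {
        super.initialize();
        // the container is a singleton, so state leaks between runs; good enough for a sketch
        container = PeersContainer.getInstance();
        peer = Mockito.mock(Peer.class);
        Mockito.when(peer.getSessionId()).thenReturn(42L);
    }

    // tick 0: register a peer for session 42
    public void thread1() {
        container.addPeer(peer);
    }

    // tick 1: drop the whole session
    public void thread2() {
        waitForTick(1);
        container.removePeers(42L);
    }

    // tick 2: read back; with the posted code this throws NullPointerException,
    // because getIfPresent() now returns null and is passed straight to
    // Collections.unmodifiableCollection()
    public void thread3() {
        waitForTick(2);
        Collection<Peer> peers = container.getPeers(42L);
        assertTrue(peers.isEmpty());
    }

    public static void main(String[] args) throws Throwable {
        TestFramework.runManyTimes(new PeersContainerInterleavingTest(), 100);
    }
}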

Answer


This sounds like a job for static analysis, not testing, unless you have time to run many trillions of test cases. You can hardly test multi-threaded behaviour; test the behaviour in a single thread, then prove the absence of threading errors.

Try:

http://www.contemplateltd.com/threadsafe

http://checkthread.org/

+0

Why? Imagine you decide to write your own `ConcurrentHashMap` from scratch. It has one shared table, 3 read methods and 3 write methods, and those methods can be invoked from different threads in different orders. The main challenge is how to reduce the 100 possible use cases to the 10 critical ones. –

+0

The number of test cases needed is proportional to the number of possible bytecode sequences in the implementation, not to the number of method calls. Even for a small class of only 100 VM opcodes on a machine with just 4 cores, that is more than the number of grains of sand on a beach for every star in the galaxy... – soru