An implementation of Lucene's
org.apache.lucene.store.Directory which uses Infinispan to store Lucene indexes.
As with RAMDirectory, the data is stored in memory, but this implementation provides some additional flexibility:
Passivation, LRU or LIRS
Bigger indexes can be configured to passivate cleverly selected chunks of data to a cache store.
This can be a local filesystem, a network filesystem, a database or a cloud store such as S3. See Infinispan's core documentation for a full list of available implementations, or implement
org.infinispan.persistence.spi.CacheWriter to add your own.
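A passivating chunk cache might be configured along these lines. This is a sketch only: element names follow the approximate Infinispan 7.x XML schema and vary between versions, and the cache name, eviction size and path are illustrative, not prescribed:

```xml
<local-cache name="LuceneIndexesData">
   <!-- Keep only the most frequently used chunks in memory (LIRS policy);
        entries evicted from memory are passivated to the store below -->
   <eviction strategy="LIRS" max-entries="5000"/>
   <persistence passivation="true">
      <!-- Evicted chunks are written to the local filesystem -->
      <file-store path="/var/lucene-index-store"/>
   </persistence>
</local-cache>
```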
Non-volatile memory
The contents of the index can be stored in their entirety in such a store, so that no data is lost if the system shuts down or crashes.
A copy of the index is written to the store synchronously or asynchronously, depending on configuration. If you enable
Infinispan's clustering, the segments are always duplicated synchronously to other nodes, even in asynchronous mode, so you can
benefit from good reliability even while choosing the asynchronous mode to write the index to slower store implementations.
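Asynchronous writing to a slow store can be expressed as a write-behind configuration, sketched here against the approximate 7.x schema (cache name, path and the shared flag are illustrative assumptions):

```xml
<replicated-cache name="LuceneIndexesData">
   <persistence passivation="false">
      <!-- A store on a shared (e.g. network) filesystem; index segments
           are flushed to it in the background by write-behind threads -->
      <file-store path="/mnt/shared/lucene-index" shared="true">
         <write-behind/>
      </file-store>
   </persistence>
</replicated-cache>
```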
Real-time change propagation
All changes made on a node are propagated at low latency to the other nodes of the cluster; this was designed especially for
interactive usage of Lucene, so that after an IndexWriter commits on one node, new IndexReaders opened on any node of the cluster
will deliver updated search results.
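Opening such a Directory could look like the sketch below. The three-cache layout is the DirectoryBuilder API; the LuceneIndexes* cache names follow a common convention but are arbitrary, and the configuration file name and index name "myIndex" are illustrative:

```java
import org.apache.lucene.store.Directory;
import org.infinispan.Cache;
import org.infinispan.lucene.directory.DirectoryBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class InfinispanDirectoryExample {
   public static void main(String[] args) throws Exception {
      // Start a cache manager from a (clustered) configuration file
      DefaultCacheManager cacheManager = new DefaultCacheManager("infinispan.xml");
      Cache<?, ?> metadataCache = cacheManager.getCache("LuceneIndexesMetadata");
      Cache<?, ?> dataCache = cacheManager.getCache("LuceneIndexesData");
      Cache<?, ?> lockCache = cacheManager.getCache("LuceneIndexesLocking");

      // Build a Lucene Directory backed by the three caches
      Directory directory = DirectoryBuilder
            .newDirectoryInstance(metadataCache, dataCache, lockCache, "myIndex")
            .create();

      // An IndexWriter committing against this Directory is visible to
      // IndexReaders opened against the same caches on any node.
   }
}
```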
Distributed heap
Infinispan acts as a shared heap for the purpose of total memory consumption, so you can avoid hitting the slower disks even
if the total size of the index can't fit in the memory of a single node: the network is faster than disks, especially if the index
is bigger than the memory available to cache it.
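Spreading the chunk cache across the cluster, so the aggregate heap of all nodes holds the index, might be configured as a distributed cache (approximate 7.x schema; the name and owner count are illustrative):

```xml
<!-- Each chunk is stored on two owner nodes; a read that misses locally
     fetches the chunk over the network instead of going to disk -->
<distributed-cache name="LuceneIndexesData" owners="2"/>
```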
Distributed locking
As with the default Lucene Directory implementations, a global lock needs to protect the index from having more than one IndexWriter open; in the case of a
replicated or distributed index you need to enable a cluster-wide
org.apache.lucene.store.LockFactory.
This implementation uses
org.infinispan.lucene.locking.BaseLockFactory by default; if you want to apply changes during a JTA transaction,
see also
org.infinispan.lucene.locking.TransactionalLockFactory.
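Swapping in the transactional lock factory might look like the sketch below. This is hedged: the overrideWriteLocker builder method and the (cache, index name) constructor shape are assumptions that may differ across versions, and "myIndex" is illustrative; verify against the API of your Infinispan release:

```java
import org.apache.lucene.store.Directory;
import org.infinispan.Cache;
import org.infinispan.lucene.directory.DirectoryBuilder;
import org.infinispan.lucene.locking.TransactionalLockFactory;

public class TransactionalIndexLocking {
   // Assumes lockCache is configured as a JTA-transactional cache.
   static Directory openDirectory(Cache<?, ?> metadataCache,
                                  Cache<?, ?> dataCache,
                                  Cache<?, ?> lockCache) {
      return DirectoryBuilder
            .newDirectoryInstance(metadataCache, dataCache, lockCache, "myIndex")
            // Replace the default BaseLockFactory with the JTA-aware variant;
            // method and constructor signatures are assumptions here.
            .overrideWriteLocker(new TransactionalLockFactory(lockCache, "myIndex"))
            .create();
   }
}
```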
Combined store patterns
It's possible to combine different stores and passivation policies, so that each node shares index changes
quickly with the other nodes, offloads less frequently used data to a per-node local filesystem, and the cluster also coordinates to keep a safe copy on a shared store.