What does LockManager do?
In general, LockManager stores Lock objects: it can hand out a Lock object and it can release it. LockManager is also responsible for removing Locks that have lived too long; this interval is configured with the "time-out" property.
JCR provides one basic implementation of LockManager:
org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl
ISPNCacheableLockManagerImpl stores Lock objects in Infinispan, so Locks are replicated and visible to the whole cluster, not just a single node. In addition, Infinispan provides a JdbcStringBasedStore, so Locks can also be persisted to the database.
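As an illustration, the following minimal sketch shows how locks handled by the workspace LockManager are acquired and released through the standard JCR API; the node name and session setup are illustrative assumptions, not part of the LockManager configuration itself:
import javax.jcr.Node;
import javax.jcr.Session;
import javax.jcr.lock.Lock;

public class LockUsageSketch {

    // Locks a node, works on it, then releases the lock.
    public void lockAndRelease(Session session) throws Exception {
        // Illustrative node; only mix:lockable nodes can be locked.
        Node node = session.getRootNode().addNode("myNode");
        node.addMixin("mix:lockable");
        session.save();

        // The workspace LockManager (here ISPNCacheableLockManagerImpl)
        // creates and stores the Lock object behind this call.
        Lock lock = node.lock(true /* deep */, true /* session-scoped */);
        try {
            System.out.println("Locked by: " + lock.getLockOwner());
            // ... work on the locked subtree; other sessions cannot modify it ...
        } finally {
            // Releasing the lock removes the stored Lock from the LockManager;
            // locks that are never released are cleaned up after "time-out".
            node.unlock();
        }
    }
}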
You can enable the LockManager by adding the lock-manager configuration to the workspace configuration.
For example:
<workspace name="ws">
...
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
<properties>
<property name="time-out" value="15m" />
...
</properties>
</lock-manager>
...
</workspace>
The time-out parameter defines the interval after which expired Locks are removed. LockRemover is a separate thread that periodically asks the LockManager to remove Locks that have lived longer than this interval.
The configuration uses a template Infinispan configuration for all LockManagers.
The lock template configuration, test-infinispan-lock.xml:
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="urn:infinispan:config:8.2"
xsi:schemaLocation="urn:infinispan:config:8.2 http://www.infinispan.org/schemas/infinispan-config-8.2.xsd">
<threads>
<thread-factory name="infinispan-factory" group-name="infinispan" thread-name-pattern="%G %i" priority="5"/>
<!-- listener-executor -->
<blocking-bounded-queue-thread-pool name="infinispan-listener" thread-factory="infinispan-factory" core-threads="1"
max-threads="5" queue-length="0" keepalive-time="0"/>
</threads>
<jgroups transport="org.infinispan.remoting.transport.jgroups.JGroupsTransport">
<stack-file name="stack" path="${exo.jcr.cluster.jgroups.config}"/>
</jgroups>
<cache-container name="lock-manager" default-cache="default" listener-executor="infinispan-listener"
statistics="true">
<jmx duplicate-domains="true" domain="jcr.ispn.cache" mbean-server-lookup="org.infinispan.jmx.PlatformMBeanServerLookup"/>
<transport cluster="${exo.cluster.partition.name}-jcr-lock" stack="stack" lock-timeout="240000"/>
<replicated-cache-configuration mode="SYNC" name="default" statistics="true" remote-timeout="${exo.jcr.cluster.lock.sync.repltimeout:240000}">
<locking isolation="READ_COMMITTED" concurrency-level="500" striping="false" write-skew="false"
acquire-timeout="${exo.jcr.lock.lockacquisitiontimeout:180000}"/>
<transaction transaction-manager-lookup="org.exoplatform.services.transaction.infinispan.JBossStandaloneJTAManagerLookup" mode="NON_XA"/>
<state-transfer enabled="${exo.jcr.cluster.lock.statetransfer.fetchinmemorystate:false}"
timeout="${exo.jcr.cluster.lock.statetransfer.timeout:240000}"/>
<eviction strategy="NONE" />
<expiration lifespan="-1" />
<persistence passivation="false">
<string-keyed-jdbc-store xmlns="urn:infinispan:config:store:jdbc:8.0" shared="true" fetch-state="true"
read-only="false" purge="false" preload="true">
<string-keyed-table drop-on-exit="${infinispan-cl-cache.jdbc.table.drop}" create-on-start="${infinispan-cl-cache.jdbc.table.create}"
prefix="${infinispan-cl-cache.jdbc.table.name}">
<id-column name="${infinispan-cl-cache.jdbc.id.column}" type="${infinispan-cl-cache.jdbc.id.type}" />
<data-column name="${infinispan-cl-cache.jdbc.data.column}" type="${infinispan-cl-cache.jdbc.data.type}" />
<timestamp-column name="${infinispan-cl-cache.jdbc.timestamp.column}" type="${infinispan-cl-cache.jdbc.timestamp.type}" />
</string-keyed-table>
</string-keyed-jdbc-store>
</persistence>
</replicated-cache-configuration>
</cache-container>
</infinispan>
To prevent any consistency issues with the lock data, ensure that your cache loader is org.infinispan.persistence.jdbc.stringbased.JdbcStringBasedStore and that your database engine is transactional. For more information about JdbcStringBasedStore, refer to the Infinispan documentation.
As you can see, all configurable parameters are expressed as templates and will be replaced by the LockManager configuration parameters:
<lock-manager class="org.exoplatform.services.jcr.impl.core.lock.infinispan.ISPNCacheableLockManagerImpl">
<properties>
<property name="time-out" value="15m" />
<property name="infinispan-configuration" value="conf/standalone/cluster/test-infinispan-lock.xml" />
<property name="jgroups-configuration" value="udp-mux.xml" />
<property name="infinispan-cluster-name" value="JCR-cluster" />
<property name="infinispan-cl-cache.jdbc.table.name" value="lk" />
<property name="infinispan-cl-cache.jdbc.table.create" value="true" />
<property name="infinispan-cl-cache.jdbc.table.drop" value="false" />
<property name="infinispan-cl-cache.jdbc.id.column" value="id" />
<property name="infinispan-cl-cache.jdbc.data.column" value="data" />
<property name="infinispan-cl-cache.jdbc.timestamp.column" value="timestamp" />
<property name="infinispan-cl-cache.jdbc.datasource" value="jdbcjcr" />
<property name="infinispan-cl-cache.jdbc.dialect" value="${dialect}" />
<property name="infinispan-cl-cache.jdbc.connectionFactory" value="org.exoplatform.services.jcr.infinispan.ManagedConnectionFactory" />
</properties>
</lock-manager>
Configuration requirements:
infinispan-cl-cache.jdbc.id.type, infinispan-cl-cache.jdbc.data.type and infinispan-cl-cache.jdbc.timestamp.type are injected into the Infinispan configuration as the properties idColumnType, dataColumnType and timestampColumnType respectively. You can set these data types according to your database type, or set them to AUTO (or omit them entirely) so that the data types are detected automatically.
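For example, instead of relying on automatic detection, the column types can be set explicitly in the lock-manager properties; the values below are illustrative and taken from the MySQL row of the type table at the end of this section:
<property name="infinispan-cl-cache.jdbc.id.type" value="VARCHAR(512)" />
<property name="infinispan-cl-cache.jdbc.data.type" value="LONGBLOB" />
<property name="infinispan-cl-cache.jdbc.timestamp.type" value="BIGINT" />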
jgroups-configuration is moved to a separate configuration file, udp-mux.xml. In this case, the udp-mux.xml file is a common JGroups configuration for all components (QueryHandler, Cache, LockManager), but you can still create your own configuration.
An example udp-mux.xml:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.2.xsd">
<UDP
singleton_name="JCR-cluster"
mcast_port="${jgroups.udp.mcast_port:45588}"
tos="8"
ucast_recv_buf_size="20M"
ucast_send_buf_size="640K"
mcast_recv_buf_size="25M"
mcast_send_buf_size="640K"
loopback="true"
max_bundle_size="64K"
max_bundle_timeout="30"
ip_ttl="${jgroups.udp.ip_ttl:8}"
enable_bundling="true"
enable_diagnostics="true"
thread_naming_pattern="cl"
timer_type="old"
timer.min_threads="4"
timer.max_threads="10"
timer.keep_alive_time="3000"
timer.queue_max_size="500"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="8"
thread_pool.keep_alive_time="5000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="10000"
thread_pool.rejection_policy="discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="1"
oob_thread_pool.max_threads="8"
oob_thread_pool.keep_alive_time="5000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Run"/>
<PING timeout="2000"
num_initial_members="20"/>
<MERGE2 max_interval="30000"
min_interval="10000"/>
<FD_SOCK/>
<FD_ALL/>
<VERIFY_SUSPECT timeout="1500" />
<BARRIER />
<pbcast.NAKACK2 xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="2000"
xmit_table_max_compaction_time="30000"
max_msg_batch_size="500"
use_mcast_xmit="false"
discard_delivered_msgs="true"/>
<UNICAST xmit_interval="2000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="2000"
xmit_table_max_compaction_time="60000"
conn_expiry_timeout="60000"
max_msg_batch_size="500"/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
max_bytes="4M"/>
<pbcast.GMS print_local_addr="true" join_timeout="3000"
view_bundling="true"/>
<UFC max_credits="2M"
min_threshold="0.4"/>
<MFC max_credits="2M"
min_threshold="0.4"/>
<FRAG2 frag_size="60K" />
<RSVP resend_interval="2000" timeout="10000"/>
<pbcast.STATE_TRANSFER />
<!-- pbcast.FLUSH /-->
</config>
Data, ID and timestamp column types for different databases:
Database name | Data type | ID type | Timestamp type
---|---|---|---
HSQL | VARBINARY(65535) | VARCHAR(512) | BIGINT
MySQL | LONGBLOB | VARCHAR(512) | BIGINT
Oracle | BLOB | VARCHAR2(512) | NUMBER(19, 0)
PostgreSQL/PostgrePlus | bytea | VARCHAR(512) | BIGINT
MSSQL | VARBINARY(MAX) | VARCHAR(512) | BIGINT