Liferay 6.1 Clustering using mod_jk Connector
Clustering allows us to run portal instances on several parallel servers. The load is distributed across the servers, and even if one of them fails the portal remains accessible through the other cluster nodes. Clustering is crucial for a scalable enterprise portal, because you can improve performance simply by adding more nodes to the cluster.
For larger installations, you will likely need a clustered configuration to handle the traffic of a popular website. A cluster lets us distribute incoming traffic across several machines, so the site can handle more web traffic, and handle it faster, than a single machine could. Liferay portal works well in a clustered environment.
It is a well-known fact that serving static resources such as images, JavaScript and CSS through the web server improves response time, and this applies to Liferay portal as well. By default, the static resources of the portal, portlets and themes are served by the Liferay portal server, which adds some overhead (for example servlet filters) to every request. Serving these static resources directly from the web server avoids that overhead. In the majority of production deployments, a web server is placed in front of the Liferay portal servers.
Use case: Setting up two Liferay Tomcat instances that point to the same database, together with an Apache web server, on the same Windows machine.
Prerequisite:
· Install MySQL
Configuration Steps:
These configuration steps assume that we have two Liferay instances running on different ports, using the same MySQL database, on a Windows machine.
Step 1: Copy static resources to the Apache web server
As a first step, we need to copy the static resources to the Apache web server's document root directory.
· Copy all the directories (plugins) except the ROOT directory from Liferay_Home/tomcat/webapps/ to the Apache web server document root, Apache 2.2/htdocs/.
· Copy the html and layouttpl directories from Liferay_Home/tomcat/webapps/ROOT/ to Apache 2.2/htdocs/.
Note: We copied whole directories just to keep it simple. Ideally we should copy only static resource files such as images, CSS and JavaScript, preserving their folder structure. For an actual deployment, a script can be written to do this task, as sketched below.
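A minimal sketch of such a script as a Windows batch file, assuming hypothetical installation paths (C:\liferay-portal-6.1\tomcat-7.0.27 and C:\Apache2.2 are placeholders; adjust them to your environment):

rem copy-static-resources.bat - copy Liferay static resources to the Apache document root
rem The paths below are assumptions; change them to match your installation.
set LIFERAY_TOMCAT=C:\liferay-portal-6.1\tomcat-7.0.27
set HTDOCS=C:\Apache2.2\htdocs

rem Copy all plugin directories except ROOT
for /D %%P in ("%LIFERAY_TOMCAT%\webapps\*") do (
  if /I not "%%~nxP"=="ROOT" xcopy "%%P" "%HTDOCS%\%%~nxP" /E /I /Y
)

rem Copy the html and layouttpl folders from the ROOT application
xcopy "%LIFERAY_TOMCAT%\webapps\ROOT\html" "%HTDOCS%\html" /E /I /Y
xcopy "%LIFERAY_TOMCAT%\webapps\ROOT\layouttpl" "%HTDOCS%\layouttpl" /E /I /Y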
Step 2: Configure mod_jk
mod_jk is the connector used to connect the Tomcat servlet container with web servers. To configure it with the Apache web server, first download it from the Apache Tomcat Connectors download page. Download the connector built for the correct version of the Apache web server and for your OS and hardware.
Step 3: Define mod_jk in Apache 2.2
Copy the mod_jk.so file from the downloaded connector package to Apache 2.2/modules/.
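For example, from a Windows command prompt (the download path is hypothetical; use wherever you extracted the connector and your actual Apache installation directory):

copy C:\downloads\tomcat-connectors\mod_jk.so "C:\Apache2.2\modules\"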
Step 4: Create a properties file for defining cluster nodes
Create a workers.properties file under Apache 2.2/conf/ with the following properties:
# Define the list of workers that will be used
# for mapping requests
worker.list=tomcat1,tomcat2,loadbalancer,status

# Define tomcat1
# Modify the host to your host IP or DNS name.
# AJP connector port of tomcat1
worker.tomcat1.port=8010
# Host address of tomcat1
worker.tomcat1.host=localhost
# AJP protocol version
worker.tomcat1.type=ajp13
worker.tomcat1.cachesize=10
worker.tomcat1.lbfactor=1
worker.tomcat1.socket_timeout=60
worker.tomcat1.connection_pool_timeout=60
worker.tomcat1.ping_mode=A
worker.tomcat1.ping_timeout=20000
worker.tomcat1.connect_timeout=20000

# Define tomcat2
# Modify the host to your host IP or DNS name.
# AJP connector port of tomcat2
worker.tomcat2.port=8011
# Host address of tomcat2
worker.tomcat2.host=localhost
# AJP protocol version
worker.tomcat2.type=ajp13
worker.tomcat2.cachesize=10
worker.tomcat2.lbfactor=1
worker.tomcat2.socket_timeout=60
worker.tomcat2.connection_pool_timeout=60
worker.tomcat2.ping_mode=A
worker.tomcat2.ping_timeout=20000
worker.tomcat2.connect_timeout=20000

# Load-balancing behaviour
worker.loadbalancer.type=lb
# Tomcat nodes managed by the load balancer
worker.loadbalancer.balance_workers=tomcat1,tomcat2
# Enable sticky sessions
worker.loadbalancer.sticky_session=1

# Status worker for managing the load balancer
worker.status.type=status
As shown in the code above, mod_jk uses a file named workers.properties that tells Apache where to find the Tomcat instances. worker.list is a comma-separated list of worker names. Each worker defines the port on which its AJP connector listens, e.g. 8010 for tomcat1 and 8011 for tomcat2.
Step 5: Create a mod_jk configuration file in the Apache 2.2 configuration directory
Create a new httpd-mod_jk.conf file under Apache 2.2/conf/extra/ with the following configuration:
# Load the mod_jk connector
LoadModule jk_module modules/mod_jk.so
# Point mod_jk to the workers.properties file created above
JkWorkersFile conf/workers.properties
# Log file path
JkLogFile logs/mod_jk.log
# Log timestamp format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"
# Log level
JkLogLevel info
# Shared memory file used by mod_jk
JkShmFile logs/jk-runtime-status
# Send all requests to the load balancer worker
JkMount /* loadbalancer
# Serve static resources (jpg, gif, png, ico, js, css) directly from Apache 2.2/htdocs for better performance
JkUnMount /*.jpg loadbalancer
JkUnMount /*.gif loadbalancer
JkUnMount /*.png loadbalancer
JkUnMount /*.ico loadbalancer
JkUnMount /*.js loadbalancer
JkUnMount /*.css loadbalancer
In this file we load the mod_jk connector into the Apache web server and reference the workers.properties file, which tells mod_jk how to connect to the Liferay Tomcat servers. We also configure logging. The line JkMount /* loadbalancer maps all requests to the mod_jk connector, meaning every request received by the Apache web server is delegated to one of the Liferay portal servers. The JkUnMount lines then exclude static resource requests from that mapping. Altogether, this configuration means that all requests except static resources are served by Tomcat, while static resources are served by the Apache web server from its document root. This can improve response time by roughly 20-30%.
Step 6: Define virtual hosts to route to the load balancer
Open the httpd-vhosts.conf file under the Apache 2.2/conf/extra/ directory, back up its contents to another file, and replace the whole content with the following lines:

NameVirtualHost *:80
NameVirtualHost *:81

<VirtualHost *:80>
    JkMount /* loadbalancer
</VirtualHost>

<VirtualHost *:81>
    JkMount /* loadbalancer
</VirtualHost>
Step 7: Modify the httpd.conf file to enable clustering and load mod_jk in the Apache web server
Modify the httpd.conf file under the Apache 2.2/conf directory.
§ Apache normally runs on port 80 or 8080, but these ports are sometimes busy because another Windows process is using them. To change the port, search for "Listen 8080" or "Listen 80" and replace it with "Listen 81".
§ Uncomment the following lines:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule status_module modules/mod_status.so
Include conf/extra/httpd-vhosts.conf
§ Add the following lines at the end of the file:
<Proxy balancer://mycluster>
    BalancerMember ajp://localhost:8010/ route=tomcat1 smax=15 max=50 loadfactor=20
    BalancerMember ajp://localhost:8011/ route=tomcat2 smax=15 max=50 loadfactor=20
</Proxy>

<Location />
    ProxyPass balancer://mycluster/ stickysession=JSESSIONID
</Location>

<Location /balancer-manager>
    SetHandler balancer-manager
    Order Deny,Allow
    Deny from all
    Allow from localhost
</Location>

Include conf/extra/httpd-mod_jk.conf
Step 8: Configure the database
· It is assumed that you have already installed MySQL on your machine. Go to the MySQL command prompt and create a new database with any name you want:
create database liferay_loadbalancer;
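Optionally, instead of connecting as root, you can create a dedicated MySQL user for the portal and grant it privileges on the new database. The user name and password below are hypothetical; if you use them, adjust the JDBC credentials in the next step accordingly:

create user 'liferay'@'localhost' identified by 'liferay_password';
grant all privileges on liferay_loadbalancer.* to 'liferay'@'localhost';
flush privileges;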
Step 9: Configure the portal-ext.properties file for both Liferay instances with the following properties:
# MySQL
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost/liferay_loadbalancer?useUnicode=true&characterEncoding=UTF-8&useFastDateParsing=false
jdbc.default.username=root
jdbc.default.password=root

# Ehcache and clustering properties
# Ehcache configuration file for the clustered environment
net.sf.ehcache.configurationResourceName=/ehcache/hibernate-clustered.xml
# Ehcache configuration file for the multi-VM environment
ehcache.multi.vm.config.location=/ehcache/liferay-multi-vm-clustered.xml
# Enable Cluster Link for index replication and other features that depend on it
cluster.link.enabled=true
# Set to true if you want the portal to replicate an index write across all members of the cluster.
# This is useful in clustered environments where each server instance keeps its own copy of the
# Lucene search index. It is only relevant when using the default Lucene indexing engine.
lucene.replicate.write=true
Step 10: Configure the server.xml file for the first Liferay instance
§ Change the server port to 8006:
<Server port="8006" shutdown="SHUTDOWN">
§ Change the HTTP connector port to 8081 and redirectPort to 8444:
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8444" URIEncoding="UTF-8" />
§ Change the AJP connector port to 8010 and redirectPort to 8444. Make sure URIEncoding is set to "UTF-8":
<Connector port="8010" protocol="AJP/1.3" redirectPort="8444" URIEncoding="UTF-8" />
§ Add jvmRoute="tomcat1" to the following line:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
§ Replace the following line:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
with this code:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6">
    <Manager className="org.apache.catalina.ha.session.BackupManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"
             mapSendOptions="6"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.4" port="45564"
                    frequency="500" dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto" port="5000" selectorTimeout="100" maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Step 11: Configure the server.xml file for the second Liferay instance
§ Change the server port to 8007:
<Server port="8007" shutdown="SHUTDOWN">
§ Change the HTTP connector port to 8082 and redirectPort to 8445:
<Connector port="8082" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8445" URIEncoding="UTF-8" />
§ Change the AJP connector port to 8011 and redirectPort to 8445. Make sure URIEncoding is set to "UTF-8":
<Connector port="8011" protocol="AJP/1.3" redirectPort="8445" URIEncoding="UTF-8" />
§ Add jvmRoute="tomcat2" to the following line:
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
§ Replace the following line:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
with this code:
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6">
    <Manager className="org.apache.catalina.ha.session.BackupManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"
             mapSendOptions="6"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.4" port="45564"
                    frequency="500" dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto" port="5000" selectorTimeout="100" maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Step 12: Configure the context.xml file of both Liferay instances
To enable session replication, edit the Liferay_Home/tomcat/conf/context.xml file and change the <Context> element to <Context distributable="true">. Do this for both Liferay instances.
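As a minimal sketch (assuming an otherwise default Tomcat context.xml), the top of the file would look like this after the change; the rest of the file stays as it was:

<Context distributable="true">
    <!-- Default restart trigger shipped with Tomcat -->
    <WatchedResource>WEB-INF/web.xml</WatchedResource>
    ...
</Context>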
That's it.
Now start Apache, Tomcat 1 and Tomcat 2.
Once all servers have started, open a browser and hit http://localhost:81/. The request will be served by one of the Liferay instances (http://localhost:8081/ or http://localhost:8082/).
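To check which node handled a request and that sticky sessions are working, you can inspect the JSESSIONID cookie: because of the jvmRoute setting, Tomcat appends the node name to the session ID. A quick check from a Windows command prompt, assuming curl is available (it is not part of the steps above):

curl -s -L -D - -o NUL http://localhost:81/ | findstr JSESSIONID

The Set-Cookie header should end with .tomcat1 or .tomcat2, indicating which node served the request.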