- 7.2. Building
- 7.2.1. Initial Setup
- 7.2.2. Building
- 7.2.3. Installing an ElasticSearch server
- 7.2.4. Deploying the generated binary package
- 7.2.5. Deploying into an existing Karaf server
- 7.2.6. JDK Selection on Mac OS X
- 7.2.7. Running the integration tests
- 7.2.8. Running the performance tests
- 7.2.9. Testing with an example page
- 7.2.10. Integrating onto a page
7.2. Building
7.2.1. Initial Setup
Install J2SE 8.0 SDK (or later), which can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html
Make sure that your JAVA_HOME environment variable is set to the newly installed JDK location, and that your PATH includes %JAVA_HOME%\bin (Windows) or $JAVA_HOME/bin (Unix).
Install Maven 3.0.3 (or later), which can be downloaded from http://maven.apache.org/download.html. Make sure that your PATH includes the MVN_HOME/bin directory.
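The two environment variables above are typically set in a shell profile. A minimal sketch for a Unix shell follows; the JDK path used here is an example only, so substitute the directory where you actually installed the JDK:

```shell
# Sketch for a Unix shell profile (e.g. ~/.bashrc). The JDK location below
# is a hypothetical example path -- replace it with your real install.
JAVA_HOME=/usr/lib/jvm/jdk1.8.0
PATH="$JAVA_HOME/bin:$PATH"
export JAVA_HOME PATH

# The JDK's bin directory should now be first on the PATH.
echo "$PATH" | cut -d: -f1
# prints /usr/lib/jvm/jdk1.8.0/bin
```

On Windows, the equivalent is to set JAVA_HOME in the system environment variables dialog and prepend %JAVA_HOME%\bin to PATH.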
7.2.2. Building
Get the code:
git clone https://git-wip-us.apache.org/repos/asf/incubator-unomi.git
Change to the top level directory of Apache Unomi source distribution.
Run
$> mvn clean install
This will compile Apache Unomi and run all of the tests in the Apache Unomi source distribution. Alternatively, you can run
$> mvn -P \!integration-tests,\!performance-tests clean install
This will compile Apache Unomi without running the tests, which takes much less time.
The distributions will be available under the "package/target" directory.
7.2.3. Installing an ElasticSearch server
Starting with version 1.2, Apache Unomi no longer embeds an ElasticSearch server as this is no longer supported by the developers of ElasticSearch. Therefore you will need to install a standalone ElasticSearch using the following steps:
Download an ElasticSearch version. Here’s the version you will need depending on your version of Apache Unomi:
Apache Unomi <= 1.2 : https://www.elastic.co/downloads/past-releases/elasticsearch-5-1-2
Apache Unomi >= 1.3 : https://www.elastic.co/downloads/past-releases/elasticsearch-5-6-3
Uncompress the downloaded package into a directory
In the config/elasticsearch.yml file, uncomment and modify the following line:
cluster.name: contextElasticSearch
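If you prefer to script this step, the line can be uncommented and set in one command. This is a sketch that assumes the stock config file, where the line ships commented out as `#cluster.name: my-application`:

```shell
# Uncomment/overwrite the cluster.name line in place.
# GNU sed syntax shown; on macOS/BSD sed, use: sed -i '' -e '...'
sed -i -e 's/^#*cluster\.name:.*/cluster.name: contextElasticSearch/' config/elasticsearch.yml

# Verify the result.
grep '^cluster.name:' config/elasticsearch.yml
```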
Launch the server using
bin/elasticsearch (Mac, Linux)
bin\elasticsearch.bat (Windows)
Check that ElasticSearch is up and running by accessing the following URL: http://localhost:9200 (the server's default HTTP port).
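A quick way to verify both that the node answers and that the cluster name change was picked up is to query the node's root endpoint. This is a sketch assuming ElasticSearch's default HTTP port, 9200:

```shell
# Query the node's root endpoint and look for the configured cluster name
# in the JSON it returns.
response=$(curl -s http://localhost:9200/)
case "$response" in
  *contextElasticSearch*) echo "ElasticSearch is up with the expected cluster name" ;;
  *)                      echo "No (or unexpected) response -- is the server running?" ;;
esac
```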
7.2.4. Deploying the generated binary package
The "package" sub-project generates a pre-configured Apache Karaf installation that is the simplest way to get started. Simply uncompress the package/target/unomi-VERSION.tar.gz (for Linux or Mac OS X) or package/target/unomi-VERSION.zip (for Windows) archive into the directory of your choice.
You can then start the server with the following command on Unix/Linux/Mac OS X:
./bin/karaf
or in a Windows shell:
bin\karaf.bat
You will then need to launch (only on the first Karaf start) the Apache Unomi packages using the following Apache Karaf shell command:
unomi:start
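Once unomi:start has completed you can verify the deployment from the command line. This is a sketch assuming Karaf's default HTTP port (8181) and the default karaf/karaf credentials:

```shell
# Ask the cluster endpoint for basic server information; an HTTP 200
# indicates the Apache Unomi bundles are up and the REST layer answers.
status=$(curl -s -o /dev/null -w '%{http_code}' -u karaf:karaf http://localhost:8181/cxs/cluster)
if [ "$status" = "200" ]; then
  echo "Apache Unomi is up"
else
  echo "Got HTTP $status -- check the Karaf log (data/log/karaf.log)"
fi
```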
7.2.5. Deploying into an existing Karaf server
This is only needed if you didn’t use the generated package. Also, this is the preferred way to install a development environment if you intend to re-deploy the context server KAR iteratively.
Additional requirements:
- Apache Karaf 3.x, http://karaf.apache.org
Before deploying, make sure that you have Apache Karaf properly installed. You will also have to increase the default maximum memory size and perm gen size by adjusting the following environment values in the bin/setenv(.bat) files (at the end of the file):
MY_DIRNAME=`dirname $0`
MY_KARAF_HOME=`cd "$MY_DIRNAME/.."; pwd`
export JAVA_MAX_MEM=3G
export JAVA_MAX_PERM_MEM=384M
Install the WAR support, CXF and Karaf Cellar into Karaf by doing the following in the Karaf command line:
feature:repo-add cxf 3.0.2
feature:repo-add cellar 3.0.3
feature:repo-add mvn:org.apache.unomi/unomi-kar/VERSION/xml/features
feature:install unomi-kar
Create a new $MY_KARAF_HOME/etc/org.apache.cxf.osgi.cfg file and put the following property inside:
org.apache.cxf.servlet.context=/cxs
If all went smoothly, you should be able to access the context script here: http://localhost:8181/cxs/cluster. You should be able to log in with karaf / karaf and see basic server information. If not, something went wrong during the install.
7.2.6. JDK Selection on Mac OS X
You might need to select the JDK to run the tests in the itests subproject. In order to do so, you can list the installed JDKs with the following command:
/usr/libexec/java_home -V
which will output something like this :
Matching Java Virtual Machines (7):
1.7.0_51, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home
1.7.0_45, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home
1.7.0_25, x86_64: "Java SE 7" /Library/Java/JavaVirtualMachines/jdk1.7.0_25.jdk/Contents/Home
1.6.0_65-b14-462, x86_64: "Java SE 6" /Library/Java/JavaVirtualMachines/1.6.0_65-b14-462.jdk/Contents/Home
1.6.0_65-b14-462, i386: "Java SE 6" /Library/Java/JavaVirtualMachines/1.6.0_65-b14-462.jdk/Contents/Home
1.6.0_65-b14-462, x86_64: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
1.6.0_65-b14-462, i386: "Java SE 6" /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
You can then select the one you want using:
export JAVA_HOME=`/usr/libexec/java_home -v 1.7.0_51`
and then check that it was correctly referenced using:
java -version
which should give you a result such as this:
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
7.2.7. Running the integration tests
The integration tests are not executed by default in order to keep the build time minimal, but it is recommended to run them at least once before using the server to make sure that everything is OK in the build. Another way to use these tests is to run them from a continuous integration server such as Jenkins, Apache Gump, Atlassian Bamboo or others.
Note: the integration tests require JDK 7 or more recent!
To run the tests, simply activate the following profile:
mvn -P integration-tests clean install
7.2.8. Running the performance tests
Performance tests are based on Gatling. You need to have a running context server or cluster of servers before executing the tests.
Test parameters are editable in the performance-tests/src/test/scala/unomi/Parameters.scala file. baseUrls should contain the URLs of all your cluster nodes.
Run the test by using the gatling.conf file in performance-tests/src/test/resources:
export GATLING_CONF=<path>/performance-tests/src/test/resources
gatling.sh
Reports are generated in performance-tests/target/results.
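Gatling writes each run into its own timestamped directory under the results folder. A small sketch to locate the most recent report (assumes at least one run has completed):

```shell
# Pick the most recently modified results directory and print the path
# of its HTML report, which can then be opened in a browser.
latest=$(ls -1dt performance-tests/target/results/*/ 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
  echo "Latest report: ${latest}index.html"
else
  echo "No results yet -- run the performance tests first"
fi
```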
7.2.9. Testing with an example page
A default test page is provided at the following URL:
http://localhost:8181/index.html
This test page will trigger the loading of the /context.js script, which will try to retrieve the user context or create a new one if it doesn’t exist yet. It also contains an experimental integration with Facebook Login, but it doesn’t yet save the context back to the context server.
7.2.10. Integrating onto a page
Simply reference the context script in your HTML as in the following example:
<script type="text/javascript">
(function(){ var u=(("https:" == document.location.protocol) ? "https://localhost:8181/" : "http://localhost:8181/");
var d=document, g=d.createElement('script'), s=d.getElementsByTagName('script')[0]; g.type='text/javascript'; g.defer=true; g.async=true; g.src=u+'context.js';
s.parentNode.insertBefore(g,s); })();
</script>