
Setting up a Secure Single Node Elasticsearch server behind Nginx

Elasticsearch does not ship with security features by default; they are left to the sole discretion of the developer. This guide will help you set up a secure single-node Elasticsearch server. It is based on days of searching the internet and poring over the alternatives available for implementing a secure server seamlessly. As a first option, I tried the elasticsearch-jetty plugin but ran into issues like "org.elasticsearch.ElasticsearchIllegalStateException: Can't create an index". Further searching did not turn up any solutions.
The next best option seemed to be an nginx reverse proxy. While this is a good solution for a single server, it can become complicated for multi-node clusters. My use case, however, warranted only a single node, so I decided to put Elasticsearch behind an nginx reverse proxy and secure the server with SSL and password-based HTTP authentication.

We are going to set up Elasticsearch on an Ubuntu server behind nginx acting as a reverse proxy, i.e., we route all traffic to Elasticsearch through nginx.

Setting up Elasticsearch

Setting up JVM

Elasticsearch requires a Java runtime to run. We can go with either OpenJDK or Oracle Java; Oracle Java is the one recommended by Elasticsearch.

To install Oracle Java, run:

    sudo add-apt-repository ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java7-installer

or

To install OpenJDK, run:

    sudo apt-get update
    sudo apt-get install openjdk-6-jre
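
Either way, you can confirm that a Java runtime is available before moving on:

    java -version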

#### Installing Elasticsearch
The following section guides you through installing Elasticsearch 1.1.x on Ubuntu. For other Elasticsearch versions or Linux flavours, please refer to http://www.elasticsearch.org/blog/apt-and-yum-repositories/

##### Adding the Elasticsearch Repository
First we add the public key. Then we add the repository for the 1.1.x branch.

    wget -O - http://packages.elasticsearch.org/GPG-KEY-elasticsearch | sudo apt-key add -

Add the following line to the file /etc/apt/sources.list.d/elasticsearch.list

    deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main
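
One way to do this from the shell is with tee (note that this overwrites any existing elasticsearch.list):

    echo "deb http://packages.elasticsearch.org/elasticsearch/1.1/debian stable main" | sudo tee /etc/apt/sources.list.d/elasticsearch.list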

Now let us install Elasticsearch by running the following commands.

    sudo apt-get update
    sudo apt-get install elasticsearch

Start Elasticsearch with the command

    sudo service elasticsearch start

or

    sudo /etc/init.d/elasticsearch start

#### Testing Elasticsearch
Run the following shell command, replacing "domain.com" with the IP address or domain name of the server where Elasticsearch is installed.

    curl http://domain.com:9200 

You should get an output similar to the one below

    {
      "status" : 200,
      "name" : "Richard Parker",
      "version" : {
        "number" : "1.1.2",
        "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
        "build_timestamp" : "2014-05-22T12:27:39Z",
        "build_snapshot" : false,
        "lucene_version" : "4.7"
      },
      "tagline" : "You Know, for Search"
    }

Setting up nginx

Installing nginx (pronounced Engine-X) is simple. Run the following commands in terminal.

    sudo apt-get update
    sudo apt-get install nginx

We need to now start nginx, which can be done using either the command

    sudo service nginx start

or

    sudo /etc/init.d/nginx start

Testing nginx

Visiting the page http://domain.com should result in a page with the title Welcome to nginx!

Set up a domain for Elasticsearch

You should set up a domain or subdomain through your domain name registrar to point to the server where Elasticsearch is installed. This is beyond the scope of this tutorial.

From this point on, I will refer to that domain as es.domain.com

Routing the requests through nginx

First, let's stop Elasticsearch from accepting external requests. On the Elasticsearch server, open the file /etc/elasticsearch/elasticsearch.yml for editing.
Comment out the settings network.bind_host and network.publish_host.

    #network.bind_host: #some_value
    #network.publish_host: #some_other_value 

Now add the following config to the same file.

    network.host: localhost

Save the file and close it. The above setting sets both bind_host and publish_host to 'localhost', so Elasticsearch will bind only to the localhost IP address and any connection from a different IP address will be refused.

We need to restart Elasticsearch for the above configuration to take effect. Run the following command in the terminal

    sudo service elasticsearch restart
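
To confirm that Elasticsearch is now bound only to the loopback interface, you can list the listening sockets (assuming the ss utility from iproute2 is available; netstat works similarly):

    sudo ss -tlnp | grep 9200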

Testing that external requests to Elasticsearch are blocked

Running the command

    curl http://domain.com:9200

will result in the error below. This is expected, as the above is an external request and we have configured the server not to listen on external IP addresses.

    curl: (7) Failed to connect to domain.com port 9200: Connection refused

If you log in to the Elasticsearch server and run the command below in the terminal, it should produce a valid output.

    curl http://localhost:9200

should result in

    {
      "status" : 200,
      "name" : "Agent",
      "version" : {
        "number" : "1.1.2",
        "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
        "build_timestamp" : "2014-05-22T12:27:39Z",
        "build_snapshot" : false,
        "lucene_version" : "4.7"
      },
      "tagline" : "You Know, for Search"
    }

Now let's route requests to the Elasticsearch server through the domain we set up in the previous section. The next task is to make nginx capture all requests to es.domain.com, route them to localhost:9200, and send back the response.

To accomplish that, we need to create a file /etc/nginx/sites-available/elasticsearch with the following content.

    server {
        listen 80;
        server_name es.domain.com;
        location / {
            rewrite ^/(.*) /$1 break;
            proxy_ignore_client_abort on;
            proxy_pass http://localhost:9200;
            proxy_redirect http://localhost:9200 http://es.domain.com/;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
        }
    }

Now let us walk through the configuration. listen 80 instructs nginx to listen on port 80 for incoming requests. server_name es.domain.com tells nginx that any request arriving on port 80 with the Host header field set to es.domain.com should be served by this server block. rewrite ^/(.*) /$1 break can rewrite the URL if the incoming URL and the corresponding Elasticsearch URL differ; here it is redundant, since they are identical. proxy_pass specifies where the incoming request should be forwarded. proxy_redirect instructs nginx to replace the text http://localhost:9200 in the "Location" and "Refresh" header fields of the proxied server's response with http://es.domain.com/. Setting proxy_redirect default achieves the same result, since the strings to replace can be inferred from the proxy_pass and location directives.

So far we have only created the configuration. To enable it, we need to create a symlink to it in /etc/nginx/sites-enabled. Run the following command in the terminal

    sudo ln -s /etc/nginx/sites-available/elasticsearch /etc/nginx/sites-enabled/
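
Before reloading, it is a good idea to ask nginx to validate the configuration; it will report any syntax errors without affecting the running server.

    sudo nginx -t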

Now we need to reload the nginx configuration for the new site to take effect.

    sudo service nginx reload

Test that nginx forwards requests properly

Run the following command in the terminal to check whether nginx routes requests properly to the Elasticsearch server.

    curl http://es.domain.com/

should return something similar to

    {
      "status" : 200,
      "name" : "Richard Parker",
      "version" : {
        "number" : "1.1.2",
        "build_hash" : "e511f7b28b77c4d99175905fac65bffbf4c80cf7",
        "build_timestamp" : "2014-05-22T12:27:39Z",
        "build_snapshot" : false,
        "lucene_version" : "4.7"
      },
      "tagline" : "You Know, for Search"
    }

Adding Basic HTTP Authentication

To set up basic HTTP authentication, we need to create a password file. The easiest way to do that is with the htpasswd tool from apache2-utils, so let's install it.

    sudo apt-get install apache2-utils

Now let's create the password file with the htpasswd command. By default htpasswd hashes the password with MD5. We can also choose bcrypt (-B), SHA (-s) or plaintext (-p) using the switches given in parentheses. Please check the man page for more details (man htpasswd).

    sudo htpasswd -c /etc/elasticsearch/user.pwd username

htpasswd will prompt you for a password.

    New password: 
    Re-type new password: 
    Adding password for user username

Now a file /etc/elasticsearch/user.pwd will be created with the username and password specified in the following format.

    login:password

If the password is hashed, the stored hash will be prefixed with $code$, where the code identifies the hashing scheme used.
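
Additional users can later be added to the same file by running htpasswd without the -c flag (the username below is just a placeholder):

    sudo htpasswd /etc/elasticsearch/user.pwd anotheruser   # "anotheruser" is a placeholder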

Now we need to add this to our nginx's es.domain.com configuration. We will add the following lines to /etc/nginx/sites-available/elasticsearch.

    auth_basic "Elasticsearch Authentication";
    auth_basic_user_file /etc/elasticsearch/user.pwd;

auth_basic sets the message that the browser displays when prompting for a username and password. auth_basic_user_file is the path to the password file.

The file /etc/nginx/sites-available/elasticsearch should look like this.

    server {
        listen 80;
        server_name es.domain.com;
        location / {
            rewrite ^/(.*) /$1 break;
            proxy_ignore_client_abort on;
            proxy_pass http://localhost:9200;
            proxy_redirect http://localhost:9200 http://es.domain.com/;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
            auth_basic "Elasticsearch Authentication";
            auth_basic_user_file /etc/elasticsearch/user.pwd;
        }
    }

Now let's reload nginx for the added configuration to take effect.

    sudo service nginx reload

Testing HTTP authentication

Trying to access Elasticsearch without authentication should cause an error.

    curl http://es.domain.com

should result in

    <html>
    <head><title>401 Authorization Required</title></head>
    <body bgcolor="white">
    <center><h1>401 Authorization Required</h1></center>
    <hr><center>nginx/1.4.1 (Ubuntu)</center>
    </body>
    </html>

Now let us try the same command with authentication. It should succeed this time.


    curl -u username http://es.domain.com

will prompt you for the password.

    Enter host password for user 'username':

After entering the correct password, you should get the status message

    {
      "status" : 200,
      "name" : "Steel Spider",
      "version" : {
        "number" : "1.2.1",
        "build_hash" : "6c95b759f9e7ef0f8e17f77d850da43ce8a4b364",
        "build_timestamp" : "2014-06-03T15:02:52Z",
        "build_snapshot" : false,
        "lucene_version" : "4.8"
      },
      "tagline" : "You Know, for Search"
    }

### Setting up an HTTPS connection
The problem with basic HTTP authentication is that the password is sent in plaintext. Even with digest authentication, there is still room for man-in-the-middle attacks. A better option is to use an HTTPS connection so that nothing can be eavesdropped. For that we need an SSL certificate. Since the whole setup was not meant for any external entity, I chose a self-signed SSL certificate. You could also get an SSL certificate signed by a Certificate Authority.

Generating the certificate

Let us store the certificates in elasticsearch's config folder.

    sudo mkdir /etc/elasticsearch/ssl
    cd /etc/elasticsearch/ssl

Now let us generate the private server key.

    sudo openssl genrsa -des3 -out es_domain.key 1024

The tool will prompt you for a passphrase. This is important as we will use it again later.

    Generating RSA private key, 1024 bit long modulus
    ............++++++
    .....++++++
    e is 65537 (0x10001)
    Enter pass phrase for es_domain.key:
    Verifying - Enter pass phrase for es_domain.key:

Next, we need to generate a Certificate Signing Request (CSR) using the key we just generated.

    sudo openssl req -new -key es_domain.key -out es_domain.csr

This will prompt for the passphrase; enter the same passphrase as before. The most important field is the Common Name, where we need to provide the domain for which the certificate will be used. The challenge password and optional company name can be left blank.


    Country Name (2 letter code) [AU]:IN
    State or Province Name (full name) [Some-State]:State
    Locality Name (eg, city) []:City
    Organization Name (eg, company) [Internet Widgits Pty Ltd]:Company
    Organizational Unit Name (eg, section) []:Data
    Common Name (e.g. server FQDN or YOUR name) []:es.domain.com
    Email Address []:addr@server.com
    Please enter the following 'extra' attributes
    to be sent with your certificate request
    A challenge password []:
    An optional company name []:
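
If you would rather skip the interactive prompts, the subject can also be passed on the command line. This is only a sketch; adjust the fields to your own details, and note that you will still be asked for the key's passphrase.

    sudo openssl req -new -key es_domain.key -out es_domain.csr \
        -subj "/C=IN/ST=State/L=City/O=Company/OU=Data/CN=es.domain.com"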

Now we need to remove the passphrase from the key. Otherwise, every time nginx starts, we would have to enter the passphrase manually before nginx could serve requests.

    sudo cp es_domain.key es_domain.key.bk
    sudo openssl rsa -in es_domain.key.bk -out es_domain.key
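
Since the key is now stored unencrypted, it is worth restricting its permissions so that only root can read it:

    sudo chmod 600 es_domain.key es_domain.key.bk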

Now we are finally ready to generate the certificate.

    sudo openssl x509 -req -days 3650 -in es_domain.csr -signkey es_domain.key -out es_domain.crt

Here the days parameter specifies for how many days the certificate is valid. We have set it to 10 years :)
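
To double-check the generated certificate, you can print its subject and validity period:

    sudo openssl x509 -in es_domain.crt -noout -subject -dates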

Now we need to edit our virtual host file /etc/nginx/sites-available/elasticsearch so that it presents the certificate and accepts HTTPS connections.

We need to change the port from 80 to 443.

    listen 443;

We need to turn on ssl and add the ssl certificates.

    ssl on;
    ssl_certificate /etc/elasticsearch/ssl/es_domain.crt;
    ssl_certificate_key /etc/elasticsearch/ssl/es_domain.key;

Let us also add a new virtual host configuration to ensure all HTTP traffic to our system is redirected to HTTPS.

    server{
        listen 80;
        server_name es.domain.com;
        return 301 https://$host$request_uri;
    }

It is also better to designate separate log files for the Elasticsearch subdomain; otherwise, if nginx serves other virtual hosts, which will often be the case, the common log files will be flooded with Elasticsearch access requests.
First, let's create the log directory.

    sudo mkdir -p /var/log/nginx/elasticsearch/
    sudo chown www-data:www-data /var/log/nginx/elasticsearch/

Let's add the following lines to the server config.

    access_log /var/log/nginx/elasticsearch/access.log;
    error_log /var/log/nginx/elasticsearch/error.log debug;
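
Once the configuration is live, you can follow incoming requests in the new access log:

    sudo tail -f /var/log/nginx/elasticsearch/access.log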

Your /etc/nginx/sites-available/elasticsearch should look like this

    server {
        listen 443;
        server_name es.domain.com;
        ssl on;
        ssl_certificate /etc/elasticsearch/ssl/es_domain.crt;
        ssl_certificate_key /etc/elasticsearch/ssl/es_domain.key;
        access_log /var/log/nginx/elasticsearch/access.log;
        error_log /var/log/nginx/elasticsearch/error.log debug;
        location / {
            rewrite ^/(.*) /$1 break;
            proxy_ignore_client_abort on;
            proxy_pass http://localhost:9200;
            proxy_redirect http://localhost:9200 https://es.domain.com/;
            proxy_set_header  X-Real-IP  $remote_addr;
            proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header  Host $http_host;
            auth_basic "Elasticsearch Authentication";
            auth_basic_user_file /etc/elasticsearch/user.pwd;
        }
    }
    server {
        listen 80;
        server_name es.domain.com;
        return 301 https://$host$request_uri;
    }

Testing HTTPS Connection

As this is a self-signed certificate, most browsers and tools will not recognize it, so we have to copy the certificate file to our local machine.

    sudo scp server_user@server.com:/etc/elasticsearch/ssl/es_domain.crt /local/path/to/store/cert

Now let us test it.

    curl --cacert es_domain.crt --tlsv1 -u username https://es.domain.com

Once you enter the password, you will be able to see the familiar status message.

You can also point your browser at the URL. The browser will flash a warning about an unknown certificate; add an exception for the Elasticsearch site and you will be able to see the status message.

Accessing the Protected Elasticsearch Programmatically

I'm going to show an example of how to access Elasticsearch from Python, as that is the language I'm currently using.

First we need to install the elasticsearch and requests Python packages. It is advisable to set up a virtual environment specific to this project; please refer to the Python Guide on virtualenv for how to set one up.

    pip install elasticsearch requests

Now we need to provide the certificate, the domain where Elasticsearch is running, the port, and a flag indicating that SSL should be used for communication. By setting an environment variable, we can get requests to pick up the certificate to use for verification.

    export REQUESTS_CA_BUNDLE=/local/path/to/certificate.crt

Now comes the Python program elasticsearch_test.py, connection class and all. There are four different connection classes, but RequestsHttpConnection provides the easiest way to specify the certificate to use for verification.

    from elasticsearch import Elasticsearch as ES, RequestsHttpConnection as RC

    # Connect through the nginx proxy over HTTPS (port 443) with basic auth.
    # requests picks up the certificate from the REQUESTS_CA_BUNDLE variable set above.
    host_params = {'host': 'es.domain.com', 'port': 443, 'use_ssl': True}
    es = ES([host_params], connection_class=RC, http_auth=('username', 'password'), use_ssl=True)

    # Index a sample document into the "blog" index.
    post = {
        "title": "My Document",
        "body": "Hello world."
    }
    print(es.index(index="blog", doc_type="post", body=post))

Running it


    python elasticsearch_test.py

produces the output below, indicating a successful connection.


    {u'_type': u'post', u'_id': u'iWRKzIKdTBmrpK-CgAn9mg', u'created': True, u'_version': 1, u'_index': u'blog'}
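
As a final check, you can query the document back through the proxy with curl (run this from the directory holding the copied certificate; the index and type names match the example above):

    curl --cacert es_domain.crt -u username "https://es.domain.com/blog/post/_search?q=title:document&pretty"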

Congratulations! You have successfully set up a secure Elasticsearch server!
