

Security groups

This module attaches a security group to each EC2 Instance that allows inbound requests as follows:

  1. The Kibana UI port (kibana_ui_port, default 5601) from the CIDR blocks in allow_ui_from_cidr_blocks and the security groups in allow_ui_from_security_group_ids.

  2. The SSH port (ssh_port, default 22) from the CIDR blocks in allow_ssh_from_cidr_blocks and the security groups in allow_ssh_from_security_group_ids.
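
To make that concrete, here is a rough sketch (not the module's actual source) of the kind of ingress rules involved; the aws_security_group.kibana reference is an assumption for illustration only:

# Illustrative sketch only: the real rules live inside the kibana-cluster module.
# The security group reference (aws_security_group.kibana) is a placeholder.
resource "aws_security_group_rule" "allow_ui_inbound" {
  type              = "ingress"
  from_port         = var.kibana_ui_port             # default 5601
  to_port           = var.kibana_ui_port
  protocol          = "tcp"
  cidr_blocks       = var.allow_ui_from_cidr_blocks
  security_group_id = aws_security_group.kibana.id
}

resource "aws_security_group_rule" "allow_ssh_inbound" {
  type              = "ingress"
  from_port         = var.ssh_port                   # default 22
  to_port           = var.ssh_port
  protocol          = "tcp"
  cidr_blocks       = var.allow_ssh_from_cidr_blocks
  security_group_id = aws_security_group.kibana.id
}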

SSH access

You can associate an EC2 Key Pair with each of the EC2 Instances in this cluster by specifying the Key Pair's name in the ssh_key_name variable. If you don't want to associate a Key Pair with these servers, set ssh_key_name to an empty string.
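
For example, a minimal sketch of the SSH-related inputs in a module block might look like this; the source path, key name, and CIDR range are placeholders:

module "kibana_cluster" {
  # Placeholder source path; use the path or registry address for your setup.
  source = "./modules/kibana-cluster"

  # ... required parameters such as ami_id, cluster_name, etc. omitted here ...

  ssh_key_name               = "my-key-pair"      # set to "" to skip the Key Pair
  ssh_port                   = 22
  allow_ssh_from_cidr_blocks = ["10.0.0.0/16"]    # e.g. only from inside your VPC
}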

How do you connect to the Kibana cluster?

Using a load balancer

If you deploy the Kibana cluster with a load balancer in front of it (see the ELK multi-cluster example), you can use the load balancer's DNS name along with the kibana_ui_port you specified in variables.tf to form a URL like http://<loadbalancer_dns>:<kibana_ui_port>/. For example, your URL will likely look something like: http://kibanaexample-lb-77641507.us-east-1.elb.amazonaws.com:5601/
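
If you manage the load balancer in the same Terraform configuration, one way to surface that URL is an output along these lines; the aws_lb resource name is an assumption:

# Hypothetical output: assumes an aws_lb resource named "kibana" in this configuration.
output "kibana_ui_url" {
  value = "http://${aws_lb.kibana.dns_name}:${var.kibana_ui_port}/"
}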

Using the AWS Console UI

Without a load balancer to act as a single entry point, you will have to manually choose one of the IP addresses of the EC2 Instances that were deployed as part of the Auto Scaling Group. You can find the IP address of each EC2 Instance in the Kibana cluster by locating those instances in the AWS Console's Instances view. To access the Kibana UI, the IP address you use must be either public or reachable from your local network. The URL will look something like: http://<the.ip.address>:<kibana_ui_port>/
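
If you would rather look the addresses up with Terraform than click through the Console, a data source along these lines can list them; the Name tag used for filtering is an assumption about how your ASG propagates tags:

# Hypothetical lookup: assumes the cluster's instances carry a Name tag equal to
# var.cluster_name; adjust the filter to match your tagging scheme.
data "aws_instances" "kibana" {
  instance_tags = {
    Name = var.cluster_name
  }
  instance_state_names = ["running"]
}

output "kibana_node_ips" {
  value = data.aws_instances.kibana.public_ips
}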

How do you roll out updates?

If you want to deploy a new version of Kibana across the cluster, you can do so in one of two ways:

  1. Rolling deploy:

    1. Build a new AMI.

    2. Set the ami_id parameter to the ID of the new AMI (see the sketch after this list).

    3. Run terraform apply.

    4. Because the kibana-cluster module uses the Gruntwork asg-rolling-deploy module under the hood, running terraform apply will automatically perform a zero-downtime rolling deployment. Specifically, new EC2 Instances will be spawned, and only once the new EC2 Instances pass the Load Balancer health checks will the existing Instances be terminated.

      Note that there will be a brief period of time during which EC2 Instances based on both the old ami_id and the new ami_id will be running. The rolling upgrades docs suggest that this is acceptable for Elasticsearch version 5.6 and greater.

  2. New cluster:

    1. Build a new AMI.
    2. Create a totally new ASG using the kibana-cluster module with the ami_id set to the new AMI, but all other parameters the same as the old cluster.
    3. Wait for all the nodes in the new ASG to start up and pass health checks.
    4. Remove each of the nodes from the old cluster.
    5. Remove the old ASG by removing that kibana-cluster module from your code.
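
For the rolling deploy option, the ami_id swap usually amounts to a one-line change in your module block, for example (the module source path and variable name are placeholders):

module "kibana_cluster" {
  # Placeholder source path.
  source = "./modules/kibana-cluster"

  # Point the cluster at the newly built AMI; the next terraform apply then
  # triggers the zero-downtime rolling deployment described above.
  ami_id = var.kibana_ami_id

  # ... all other parameters left unchanged ...
}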

Security

Here are some of the main security considerations to keep in mind when using this module:

  1. Encryption in transit
  2. Encryption at rest
  3. Dedicated instances
  4. Security groups
  5. SSH access

Encryption in transit

Kibana can encrypt all of its network traffic. TODO: Should we recommend using X-Pack (official solution, but paid), an Nginx Reverse Proxy, a custom Elasticsearch plugin, or something else?

Encryption at rest

EC2 Instance Storage

The EC2 Instances in the cluster store their data in an EC2 Instance Store, which does not have native support for encryption (unlike EBS Volume Encryption).

TODO: Should we implement encryption at rest using the technique described at https://aws.amazon.com/blogs/security/how-to-protect-data-at-rest-with-amazon-ec2-instance-store-encryption/?

Elasticsearch Keystore

Some Elasticsearch settings may contain secrets and should be encrypted. You can use the Elasticsearch Keystore for such settings. The elasticsearch.keystore is created automatically upon boot of each node, and is available for use as described in the docs.

Reference

Required

ami_id (string, required)

The ID of the AMI to run in this cluster.

cluster_name (string, required)

The name of the kibana cluster (e.g. kibana-stage). This variable is used to namespace all resources created by this module.

instance_type (string, required)

The type of EC2 Instances to run for each node in the cluster (e.g. t2.micro).

max_size (number, required)

The maximum number of nodes to have in the kibana cluster.

min_size (number, required)

The minimum number of nodes to have in the kibana cluster.

subnet_ids (list(string), required)

The subnet IDs into which the EC2 Instances should be deployed.

user_data (string, required)

A User Data script to execute while the server is booting.

vpc_id (string, required)

The ID of the VPC in which to deploy the kibana cluster
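
To tie the required parameters together, a minimal module block might look like the following sketch; the source path, AMI ID, VPC ID, subnet IDs, and User Data path are all placeholders:

module "kibana_cluster" {
  # Placeholder source; point this at wherever the kibana-cluster module lives.
  source = "./modules/kibana-cluster"

  ami_id        = "ami-0123456789abcdef0"          # placeholder AMI ID
  cluster_name  = "kibana-stage"
  instance_type = "t2.micro"

  min_size = 2
  max_size = 4

  vpc_id     = "vpc-abc123"                        # placeholder VPC ID
  subnet_ids = ["subnet-abc123", "subnet-def456"]  # placeholder subnet IDs

  # A User Data script that starts Kibana on boot (contents not shown here).
  user_data = file("${path.module}/user-data/user-data.sh")
}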

Optional

allow_ssh_from_cidr_blocks (list(string), optional)

A list of IP address ranges in CIDR format from which SSH access will be permitted. Attempts to access SSH from all other IP addresses will be blocked.

[]

allow_ssh_from_security_group_ids (list(string), optional)

The IDs of security groups from which SSH connections will be allowed. If you update this variable, make sure to update num_ssh_security_group_ids too!

[]
allow_ui_from_cidr_blocks (list(string), optional)

A list of IP address ranges in CIDR format from which access to the UI will be permitted. Attempts to access the UI from all other IP addresses will be blocked.

[]

allow_ui_from_security_group_ids (list(string), optional)

The IDs of security groups from which access to the UI will be permitted. If you update this variable, make sure to update num_ui_security_group_ids too!

[]

If set to true, associate a public IP address with each EC2 Instance in the cluster.

false
desired_capacity (number, optional)

The desired number of EC2 Instances to run in the ASG initially. Note that auto scaling policies may change this value. If you're using auto scaling policies to dynamically resize the cluster, you should actually leave this value as null.

null
instance_profile_path (string, optional)

Path in which to create the IAM instance profile.

"/"
kibana_ui_port (number, optional)

This is the port that is used to access the Kibana UI.

5601
min_elb_capacity (number, optional)

Wait for this number of EC2 Instances to show up healthy in the load balancer on creation.

0

num_ssh_security_group_ids (number, optional)

The number of security group IDs in allow_ssh_from_security_group_ids. We should be able to compute this automatically, but due to a Terraform limitation, if there are any dynamic resources in allow_ssh_from_security_group_ids, then we won't be able to: https://github.com/hashicorp/terraform/pull/11482

0

num_ui_security_group_ids (number, optional)

The number of security group IDs in allow_ui_from_security_group_ids. We should be able to compute this automatically, but due to a Terraform limitation, if there are any dynamic resources in allow_ui_from_security_group_ids, then we won't be able to: https://github.com/hashicorp/terraform/pull/11482

0
ssh_key_name (string, optional)

The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair.

null
ssh_port (number, optional)

The port used for SSH connections

22
tags (list(object(…)), optional)

List of extra tag blocks added to the Auto Scaling Group configuration. Each element in the list is a map containing keys 'key', 'value', and 'propagate_at_launch' mapped to the respective values.

list(object({
  key                 = string
  value               = string
  propagate_at_launch = bool
}))
[]
Example
default = [
  {
    key                 = "foo"
    value               = "bar"
    propagate_at_launch = true
  }
]

target_group_arns (list(string), optional)

A list of target group ARNs to associate with the Kibana cluster.

[]
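
Example

If you front the cluster with an ALB, the target group wiring might look roughly like this sketch; the target group name, port, and VPC ID are placeholders:

resource "aws_lb_target_group" "kibana" {
  name     = "kibana-ui"
  port     = 5601              # should match kibana_ui_port
  protocol = "HTTP"
  vpc_id   = "vpc-abc123"      # placeholder VPC ID
}

module "kibana_cluster" {
  # Placeholder source path; other parameters omitted for brevity.
  source = "./modules/kibana-cluster"

  # ...

  target_group_arns = [aws_lb_target_group.kibana.arn]
}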

A maximum duration that Terraform should wait for the EC2 Instances to be healthy before timing out.

"10m"