Security groups
This module attaches a security group to each EC2 Instance that allows inbound requests as follows:
SSH: For the SSH port (default: 22), you can use the allow_ssh_from_cidr_blocks parameter to control the list of CIDR blocks that will be allowed access, and the allow_ssh_from_security_group_ids parameter to control the list of source Security Groups that will be allowed access.

The ID of the security group is exported as an output variable, which you can use with the kibana-security-group-rules, elasticsearch-security-group-rules, elastalert-security-group-rules, and logstash-security-group-rules modules to open up all the ports necessary for Kibana and the respective Elasticsearch tools.
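For example, assuming you instantiate this module in your own Terraform code, the SSH-related parameters might look roughly like the sketch below. The CIDR block and security group ID are placeholders, and the required parameters are omitted here (see the fuller sketch in the SSH access section below):

```hcl
module "kibana_cluster" {
  # ... source and required parameters omitted; see the sketch in the SSH access section below ...

  # Only allow SSH from an internal subnet and from a bastion host's security group.
  allow_ssh_from_cidr_blocks        = ["10.0.10.0/24"]
  allow_ssh_from_security_group_ids = ["sg-0123456789abcdef0"]

  # Must match the number of entries in allow_ssh_from_security_group_ids
  # (see num_ssh_security_group_ids in the Reference section).
  num_ssh_security_group_ids = 1
}
```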
SSH access
You can associate an EC2 Key Pair with each of the EC2 Instances in this cluster by specifying the Key Pair's name in the ssh_key_name variable. If you don't want to associate a Key Pair with these servers, set ssh_key_name to an empty string.
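Putting the pieces together, a minimal usage of this module might look like the following sketch. The source path, AMI ID, VPC and subnet IDs, and Key Pair name are all placeholders you would replace with your own values:

```hcl
module "kibana_cluster" {
  # Illustrative source path; point this at wherever the kibana-cluster module lives for you.
  source = "../../modules/kibana-cluster"

  cluster_name  = "kibana-stage"
  ami_id        = "ami-0123456789abcdef0" # An AMI with Kibana installed
  instance_type = "t2.micro"

  min_size = 3
  max_size = 3

  vpc_id     = "vpc-0123456789abcdef0"
  subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"]

  # User Data script that starts Kibana on boot
  user_data = file("${path.module}/user-data/user-data.sh")

  # EC2 Key Pair used for SSH; set to an empty string to skip associating a Key Pair
  ssh_key_name = "kibana-example"
}
```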
How do you connect to the Kibana cluster?
Using a load balancer
If you deploy the Kibana cluster with a load balancer in front of it (see the ELK multi-cluster example), you can use the load balancer's DNS name along with the kibana_ui_port you specified in variables.tf to form a URL like: http://loadbalancer_dns:kibana_ui_port/
For example, your URL will likely look something like: http://kibanaexample-lb-77641507.us-east-1.elb.amazonaws.com:5601/
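If the load balancer is managed in the same Terraform code, one way to surface this URL is with an output. The aws_lb.kibana resource name and var.kibana_ui_port variable below are assumptions; adjust them to match your own configuration:

```hcl
output "kibana_ui_url" {
  description = "URL for reaching the Kibana UI through the load balancer"
  value       = "http://${aws_lb.kibana.dns_name}:${var.kibana_ui_port}/"
}
```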
Using the AWS Console UI
Without a load balancer to act as a single entry point, you will have to manually choose one of the IP addresses of the EC2 Instances that were deployed as part of the Auto Scaling Group. You can find the IP address of each EC2 Instance deployed as part of the Kibana cluster by locating those instances in the AWS Console's Instances view. To access the Kibana UI, the IP address you use must be either public or reachable from your local network. The URL will look something like: http://the.ip.address:kibana_ui_port/
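If you'd rather look the addresses up with Terraform than click through the Console, a sketch like the following can do it. It assumes the instances carry a Name tag equal to the cluster name (kibana-stage here); adjust the filter to match however your cluster is actually tagged:

```hcl
# Look up the instances launched by the Kibana cluster's Auto Scaling Group.
data "aws_instances" "kibana" {
  instance_tags = {
    Name = "kibana-stage"
  }

  instance_state_names = ["running"]
}

output "kibana_node_ips" {
  description = "Public IP addresses of the Kibana nodes (empty for instances without a public IP)"
  value       = data.aws_instances.kibana.public_ips
}
```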
How do you roll out updates?
If you want to deploy a new version of Kibana across the cluster, you can use one of the following approaches:
Rolling deploy:
- Build a new AMI.
- Set the ami_id parameter to the ID of the new AMI.
- Run terraform apply.
Because the kibana-cluster module uses the Gruntwork asg-rolling-deploy module under the hood, running terraform apply will automatically perform a zero-downtime rolling deployment. Specifically, new EC2 Instances will be spawned, and only once the new EC2 Instances pass the Load Balancer health checks will the existing Instances be terminated.
Note that there will be a brief period of time during which EC2 Instances based on both the old ami_id and the new ami_id will be running. The rolling upgrades docs suggest that this is acceptable for Elasticsearch version 5.6 and greater.
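For the rolling deployment to gate on load balancer health checks, the cluster needs to be registered with your load balancer and told how many healthy instances to wait for. A sketch, assuming a target group resource named aws_lb_target_group.kibana managed elsewhere in your code:

```hcl
module "kibana_cluster" {
  # ... other parameters as in the earlier example ...
  source = "../../modules/kibana-cluster"

  # Register the ASG's instances with the load balancer's target group.
  target_group_arns = [aws_lb_target_group.kibana.arn]

  # Wait until at least this many instances report healthy in the load balancer
  # before considering the deployment complete.
  min_elb_capacity = 3
}
```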
New cluster:
- Build a new AMI.
- Create a totally new ASG using the kibana-cluster module with the ami_id set to the new AMI, but all other parameters the same as the old cluster.
- Wait for all the nodes in the new ASG to start up and pass health checks.
- Remove each of the nodes from the old cluster.
- Remove the old ASG by removing that kibana-cluster module from your code.
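In Terraform code, this approach temporarily means two copies of the module side by side, along the lines of this sketch (module names and AMI IDs are placeholders):

```hcl
# Old cluster, still serving traffic while the new one comes up.
module "kibana_cluster_old" {
  source = "../../modules/kibana-cluster"
  ami_id = "ami-0123456789abcdef0"
  # ... all other parameters ...
}

# New cluster with the new AMI; once its nodes pass health checks,
# drain the old cluster and delete the module block above.
module "kibana_cluster_new" {
  source = "../../modules/kibana-cluster"
  ami_id = "ami-0fedcba9876543210"
  # ... all other parameters the same as the old cluster ...
}
```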
Security
Here are some of the main security considerations to keep in mind when using this module:
Encryption in transit
Kibana can encrypt all of its network traffic. TODO: Should we recommend using X-Pack (official solution, but paid), an Nginx Reverse Proxy, a custom Elasticsearch plugin, or something else?
Encryption at rest
EC2 Instance Storage
The EC2 Instances in the cluster store their data in an EC2 Instance Store, which does not have native support for encryption (unlike EBS Volume Encryption).
TODO: Should we implement encryption at rest using the technique described at https://aws.amazon.com/blogs/security/how-to-protect-data-at-rest-with-amazon-ec2-instance-store-encryption/?
Elasticsearch Keystore
Some Elasticsearch settings may contain secrets and should be encrypted. You can use the Elasticsearch Keystore for such settings. The elasticsearch.keystore is created automatically upon boot of each node, and is available for use as described in the docs.
Reference
- Inputs
- Outputs
Required
- ami_id (string): The ID of the AMI to run in this cluster.
- cluster_name (string): The name of the Kibana cluster (e.g. kibana-stage). This variable is used to namespace all resources created by this module.
- instance_type (string): The type of EC2 Instances to run for each node in the cluster (e.g. t2.micro).
- max_size (number): The maximum number of nodes to have in the Kibana cluster.
- min_size (number): The minimum number of nodes to have in the Kibana cluster.
- subnet_ids (list(string)): The subnet IDs into which the EC2 Instances should be deployed.
- user_data (string): A User Data script to execute while the server is booting.
- vpc_id (string): The ID of the VPC in which to deploy the Kibana cluster.
Optional
- allow_ssh_from_cidr_blocks (list(string), default: []): A list of IP address ranges in CIDR format from which SSH access will be permitted. Attempts to access SSH from all other IP addresses will be blocked.
- allow_ssh_from_security_group_ids (list(string), default: []): The IDs of security groups from which SSH connections will be allowed. If you update this variable, make sure to update num_ssh_security_group_ids too!
- allow_ui_from_cidr_blocks (list(string), default: []): A list of IP address ranges in CIDR format from which access to the UI will be permitted. Attempts to access the UI from all other IP addresses will be blocked.
- allow_ui_from_security_group_ids (list(string), default: []): The IDs of security groups from which access to the UI will be permitted. If you update this variable, make sure to update num_ui_security_group_ids too!
- associate_public_ip_address (bool, default: false): If set to true, associate a public IP address with each EC2 Instance in the cluster.
- desired_capacity (number, default: null): The desired number of EC2 Instances to run in the ASG initially. Note that auto scaling policies may change this value. If you're using auto scaling policies to dynamically resize the cluster, you should leave this value as null.
- instance_profile_path (string, default: "/"): Path in which to create the IAM instance profile.
- kibana_ui_port (number, default: 5601): The port used to access the Kibana UI.
- min_elb_capacity (number, default: 0): Wait for this number of EC2 Instances to show up healthy in the load balancer on creation.
- num_ssh_security_group_ids (number, default: 0): The number of security group IDs in allow_ssh_from_security_group_ids. We should be able to compute this automatically, but due to a Terraform limitation, if there are any dynamic resources in allow_ssh_from_security_group_ids, then we won't be able to: https://github.com/hashicorp/terraform/pull/11482
- num_ui_security_group_ids (number, default: 0): The number of security group IDs in allow_ui_from_security_group_ids. We should be able to compute this automatically, but due to a Terraform limitation, if there are any dynamic resources in allow_ui_from_security_group_ids, then we won't be able to: https://github.com/hashicorp/terraform/pull/11482
- ssh_key_name (string, default: null): The name of an EC2 Key Pair that can be used to SSH to the EC2 Instances in this cluster. Set to an empty string to not associate a Key Pair.
- ssh_port (number, default: 22): The port used for SSH connections.
- tags (list(object(…)), default: []): List of extra tag blocks added to the Auto Scaling Group configuration. Each element in the list is a map containing keys 'key', 'value', and 'propagate_at_launch' mapped to the respective values.

  Type:

  list(object({
    key                 = string
    value               = string
    propagate_at_launch = bool
  }))

  Example:

  default = [
    {
      key                 = "foo"
      value               = "bar"
      propagate_at_launch = true
    }
  ]
- target_group_arns (list(string), default: []): A list of target group ARNs to associate with the Kibana cluster.
- wait_for_capacity_timeout (string, default: "10m"): A maximum duration that Terraform should wait for the EC2 Instances to be healthy before timing out.