AWS CodeBuild vs GitHub Actions – Pricing Comparison

AWS CodeBuild is one of the services in Amazon Web Services' suite of CI/CD tools (alongside CodePipeline, CodeDeploy, and CodeCommit), which makes it a common choice for projects that are ultimately deployed to the AWS Cloud.

GitHub, on the other hand, started as a Git-repository-as-a-service, but has evolved well beyond those early days and now offers a CI solution of its own: GitHub Actions.

Although offered by different companies, both satisfy the same requirement: setting up a CI/CD pipeline for your company's project. Both also work equally well for projects that are just getting started and for ones with thousands of contributors.

In this article, we’ll try to shed some light on how these two services compare in terms of cost when used on a real-life scale.

Free Tier comparison

AWS CodeBuild includes 100 free build minutes per month on the “general1.small” compute type, which provides 2 vCPUs, 3 GB of memory, and 64 GB of disk space, as per the documentation.

GitHub Actions includes 2000 free build minutes per month, and the hardware of each job is fixed at 2 vCPUs, 7 GB of memory, and 14 GB of disk space, as per the documentation.

Real-life example

Let's assume developers commit to your project's repository, on average, 20 times per day, each commit triggers a CI run, and each build takes, on average, 15 minutes to complete. That adds up to 9000 build minutes per month (20 × 15 × 30).

Comparing the costs of running the same CI pipeline on GitHub Actions versus AWS CodeBuild, the monthly bill breaks down as follows:

AWS CodeBuild:

  • The first 100 minutes are covered by the free tier
  • The remaining 8900 minutes are charged at $0.005 per minute
  • Total cost: $44.50 per month

GitHub Actions:

  • The first 2000 minutes are covered by the free tier
  • The remaining 7000 minutes are charged at $0.008 per minute
  • Total cost: $56 per month
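
In general (ignoring extras such as artifact storage and data transfer), the monthly bill for either service boils down to: cost = (total build minutes − free tier minutes) × per-minute rate. That is where the (9000 − 100) × $0.005 = $44.50 and (9000 − 2000) × $0.008 = $56 figures above come from.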

Conclusion

Of course, the decision whether to use CodeBuild or GitHub Actions is rarely based purely on price. Other factors come into play, including flexibility, integration with other services your project uses (e.g. other AWS services), observability (how easy it is for your developers to inspect build logs), scalability, parallelism, etc.

For this reason, I cannot highlight one service as superior to the other. We at ScavaSoft use both AWS CodeBuild and GitHub Actions for our internal projects. We use GitHub Actions mostly for linting and unit-testing jobs, whereas for Continuous Deployment to production environments we rely on AWS CodeBuild and CodePipeline's tight integration with other AWS services like CloudFormation.

Clutch Crowns ScavaSoft as a Top Web Development Company in Bulgaria

ScavaSoft's goal is to deliver high-quality software solutions to our clients at a reasonable price and with a quick turnaround. We are able to do so thanks to our amazing team, which has deep experience and expertise in the industry and follows the Agile methodology to keep our development processes flexible and transparent.

As we continue to provide these services, a B2B ratings and reviews platform has taken notice and honored us with an award. We would like to thank Clutch for recognizing our web development efforts and for naming us one of the top companies in Bulgaria.

Here is our CEO, Plamen Petkov, with a brief message of acknowledgement:

“We are really proud to have been chosen as one of the leading web developers in Bulgaria. We would like to thank Clutch for giving us this opportunity and recognition. To everyone who supported us and believed in us, this award is for all of you!”

For those wondering what Clutch is: they are an established platform based in Washington, DC, committed to helping small, mid-market, and enterprise businesses identify and connect with the service providers they need to achieve their goals.

Again, we would like to extend our gratitude to everyone who helped us reach this amazing milestone. To our partners and clients, it goes without saying, but we couldn’t have done it without your help.

Looking for a capable web development company to handle your needs? Look no further: ScavaSoft is here to help. Drop us a line to learn more about the services we offer.

Provisioning an AWS ECS cluster using Terraform

Lessons learned while automating the infrastructure provisioning of an ECS cluster of EC2 virtual machines that run Docker and scale with your apps, using Terraform as the infrastructure orchestration tool.

What is AWS, Docker, ECS, Terraform?

Amazon Web Services is the obvious choice when it comes to deploying apps on the cloud.

Its staggering 47% market share speaks for itself.

Docker, a containerization tool, has also been around for a while and the DevOps community has seen its potential, responding with rapid adoption and community support.

Amazon also saw this potential and created a managed solution for deploying and managing containers across a fleet of virtual machines: AWS ECS. Under the hood, ECS builds on AWS's well-known EC2 virtual machines, as well as CloudWatch for monitoring them, auto scaling groups (for provisioning and deprovisioning machines depending on the current load of the cluster), and, most importantly, Docker as the containerization engine.

Terraform is an infrastructure orchestration tool (the approach is also known as "infrastructure as code"). Using Terraform, you declare every piece of your infrastructure once, in static files, which lets you deploy and destroy cloud infrastructure with a single command, make incremental changes, roll back, version your infrastructure, and so on.

The goal of this article is to teach you how to create the Terraform "recipes" that define an AWS ECS cluster as code, so that you can deploy or redeploy the cluster in a repeatable, predictable, scalable, and error-free way. A brief description accompanies every step involved.

Preparation

We will not go into much detail about how to download Terraform or how to run it locally, because that information is readily available on the Terraform website.

What you will need, though, is an AWS user with administrative privileges, plus the access key and secret key of that user (generated from the AWS IAM console). You will also need to configure the AWS provider of Terraform; that too is relatively easy and out of scope for this document.
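
For reference, a minimal provider configuration might look like the one below. The region is just an example, and this sketch assumes your access and secret keys are supplied through the usual environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) or the shared credentials file:

provider "aws" {
  # Example region; use the one you deploy to
  region = "eu-west-1"
}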

Terraform structure

ecs-cluster.tf

We'll start by creating the AWS ECS cluster, which is the most basic building block of the AWS ECS service. It has no dependencies (e.g. it doesn't need a VPC), so we just give it a name that comes from a Terraform variable we'll pass in when creating the infrastructure. This parameterization allows us to easily create multiple ECS clusters (and their satellite resources) from the same set of Terraform files, if needed.

resource "aws_ecs_cluster" "ecs_cluster" {
  name = var.cluster_name
}
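
With these few lines in place, you can pass the cluster name at apply time, e.g. terraform apply -var='cluster_name=my-cluster'.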

vpc.tf

For this tutorial, we'll assume you want to create a brand new AWS VPC within the current region. This VPC will contain the EC2 instances launched within the ECS cluster and will allow them to communicate securely and privately, without resorting to the public internet and public IPs (in fact, the EC2 instances can be hidden from the internet entirely). Let's create the new VPC now:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "VPC of cluster ${var.cluster_name}"
  cidr = "10.0.0.0/16"

  azs = [
    data.aws_availability_zones.available.names[0],
    data.aws_availability_zones.available.names[1],
    data.aws_availability_zones.available.names[2]
  ]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  # If you want the EC2 instances to live within the private subnets but still be able to communicate with the public internet
  enable_nat_gateway = true
  # Save some money but less resilient to AZ failures
  single_nat_gateway = true
}

We’ll refer to the above new VPC and its private/public subnets in other resources below.

data.tf

You may have noticed that the above VPC resource has a hardcoded part (the subnet ranges) and a dynamic part (the three AZs, or Availability Zones). The dynamic part is useful because it allows you to redeploy the ECS cluster to different AWS regions without having to change hardcoded values every time. Another advantage of this dynamic lookup is that your deployments will adapt seamlessly to future AZ changes by Amazon (should they decide to retire an entire Availability Zone or add new ones). The data block below "retrieves" the up-to-date AZs from the current AWS region during deployment; this data source is used during the VPC creation above.

data "aws_availability_zones" "available" {
  state = "available"
}

Another data source we will need later is one that fetches the most up-to-date ECS-optimized AWS EC2 AMI. An AMI is nothing more than a codename (e.g. "ami-1234567") identifying a template you can use to jump-start a brand new EC2 instance. There are AMIs for the popular Linux distributions: Ubuntu, Debian, etc. The one we retrieve below is a Linux-based AMI, created and maintained by Amazon, that includes the essential tools for an EC2 instance to work as an ECS instance (Docker, Git, the ECS agent, SSH).

data "aws_ami" "ecs" {
  most_recent = true # Get the latest version

  filter {
    name   = "name"
    values = ["amzn2-ami-ecs-*"] # ECS-optimized images
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["amazon"] # Only official images
}

autoscaling_groups.tf

Now comes the interesting part. Using AWS autoscaling groups, we can automate the launch of new EC2 instances whenever the load of the ECS cluster reaches a certain metric (e.g. the cluster has 70%+ of its RAM reserved).

First we create an autoscaling group that defines the minimum, maximum, and desired EC2 instance counts. These parameters let us guarantee a minimum amount of available resources under light load, while still keeping costs under control during periods of high load and unexpected traffic spikes.

resource "aws_autoscaling_group" "ecs_cluster_spot" {
  name_prefix = "${var.cluster_name}_asg_spot_"
  termination_policies = [
    "OldestInstance" # When a "scale down" event occurs, which instances to kill first?
  ]
  default_cooldown          = 30
  health_check_grace_period = 30
  max_size                  = var.max_spot_instances
  min_size                  = var.min_spot_instances
  desired_capacity          = var.min_spot_instances

  # Use this launch configuration to define "how" the EC2 instances are to be launched
  launch_configuration      = aws_launch_configuration.ecs_config_launch_config_spot.name

  lifecycle {
    create_before_destroy = true
  }

  # Refer to vpc.tf for more information
  # You could use the private subnets here instead,
  # if you want the EC2 instances to be hidden from the internet
  vpc_zone_identifier = module.vpc.public_subnets

  tags = [
    {
      key                 = "Name"
      value               = var.cluster_name

      # Make sure EC2 instances are tagged with this tag as well
      propagate_at_launch = true
    }
  ]
}

# Attach an autoscaling policy to the spot cluster to target 70% MemoryReservation on the ECS cluster.
resource "aws_autoscaling_policy" "ecs_cluster_scale_policy" {
  name                   = "${var.cluster_name}_ecs_cluster_spot_scale_policy"
  policy_type            = "TargetTrackingScaling"
  adjustment_type        = "ChangeInCapacity"
  autoscaling_group_name = aws_autoscaling_group.ecs_cluster_spot.name

  lifecycle {
    ignore_changes = [
      adjustment_type
    ]
  }

  target_tracking_configuration {
    customized_metric_specification {
      metric_dimension {
        name  = "ClusterName"
        value = var.cluster_name
      }
      metric_name = "MemoryReservation"
      namespace   = "AWS/ECS"
      statistic   = "Average"
    }
    target_value = 70.0
  }
}

The above basically automates the following:

  • Whenever the cluster has less than 70% of its memory reserved, the autoscaling policy scales the group back in, down to "var.min_spot_instances" instances
  • As soon as the cluster hits 70% reserved memory or more, the autoscaling policy kicks in and launches new EC2 instances inside the cluster (using the "aws_launch_configuration.ecs_config_launch_config_spot" launch configuration), up to "var.max_spot_instances" instances, waiting at least 30 seconds between scaling actions (default_cooldown = 30), until the target is satisfied again (the cluster has less than 70% of its memory reserved)

launch_configuration.tf

We automated the scaling of the cluster above, but we still haven't defined what kind of EC2 instances will be launched when scaling occurs: which AMI they will use, whether they will be larger ones (which cost more per month) or smaller ones (which cost less), whether they will assume an IAM role (to be able to access other AWS resources on your behalf), and so on.

The launch configuration defines all of these parameters:

resource "aws_launch_configuration" "ecs_config_launch_config_spot" {
  name_prefix                 = "${var.cluster_name}_ecs_cluster_spot"
  image_id                    = data.aws_ami.ecs.id # Use the latest ECS optimized AMI
  instance_type               = var.instance_type_spot # e.g. t3a.medium

  # e.g. "0.013", the most you are willing to pay (per hour) for each instance
  # See the EC2 Spot Pricing page for more information:
  # https://aws.amazon.com/ec2/spot/pricing/
  spot_price                  = var.spot_bid_price

  enable_monitoring           = true
  associate_public_ip_address = true
  lifecycle {
    create_before_destroy = true
  }

  # This user data is a script that runs the first time the machine starts.
  # This specific example attaches the EC2 instance to the ECS cluster we created earlier
  # and marks the instance as purchased through Spot pricing
  user_data = <<EOF
#!/bin/bash
echo ECS_CLUSTER=${var.cluster_name} >> /etc/ecs/ecs.config
echo ECS_INSTANCE_ATTRIBUTES={\"purchase-option\":\"spot\"} >> /etc/ecs/ecs.config
EOF

  # We define this security group in security_groups.tf below
  security_groups = [
    aws_security_group.sg_for_ec2_instances.id
  ]

  # If you want to SSH into the instance and manage it directly:
  # 1. Make sure this key exists in the AWS EC2 dashboard
  # 2. Make sure your local SSH agent has it loaded
  # 3. Make sure the EC2 instances are launched within a public subnet (are accessible from the internet)
  key_name             = var.ssh_key_name

  # Allow the EC2 instances to access AWS resources on your behalf, using this instance profile and the permissions defined there
  iam_instance_profile = aws_iam_instance_profile.ec2_iam_instance_profile.arn
}
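
One thing to note: the launch configuration above references aws_iam_instance_profile.ec2_iam_instance_profile, which we don't define elsewhere in this article. A minimal sketch of that instance profile could look like the following, assuming you attach Amazon's managed AmazonEC2ContainerServiceforEC2Role policy (which grants the ECS agent the permissions it needs):

resource "aws_iam_role" "ec2_iam_role" {
  name_prefix = "${var.cluster_name}_ec2_role_"

  # Allow EC2 instances to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# Grant the permissions the ECS agent needs (register instances, pull images, push logs)
resource "aws_iam_role_policy_attachment" "ecs_for_ec2" {
  role       = aws_iam_role.ec2_iam_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

resource "aws_iam_instance_profile" "ec2_iam_instance_profile" {
  name_prefix = "${var.cluster_name}_profile_"
  role        = aws_iam_role.ec2_iam_role.name
}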

security_groups.tf

AWS is big on security: almost every resource you create is locked down from outside access by default, and the same goes for EC2 instances. If you want these instances to receive any internet traffic (e.g. if they have an HTTP server installed), or if you want to SSH into the machines from your computer over the public internet, you need to make sure the security group attached to the EC2 instances allows all of this:

# Allow EC2 instances to receive HTTP/HTTPS/SSH traffic IN and any traffic OUT
resource "aws_security_group" "sg_for_ec2_instances" {
  name_prefix = "${var.cluster_name}_sg_for_ec2_instances_"
  description = "Security group for EC2 instances within the cluster"
  vpc_id      = module.vpc.vpc_id
  lifecycle {
    create_before_destroy = true
  }
  tags = {
    Name = var.cluster_name
  }
}

resource "aws_security_group_rule" "allow_ssh" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.sg_for_ec2_instances.id
}

resource "aws_security_group_rule" "allow_http_in" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.sg_for_ec2_instances.id
}

resource "aws_security_group_rule" "allow_https_in" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.sg_for_ec2_instances.id
}

resource "aws_security_group_rule" "allow_egress_all" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.sg_for_ec2_instances.id
}

Of course, if you plan on running something like MySQL on the EC2 instances, you may want to expose other port ranges as well (e.g. 3306 for a MySQL server), as shown below. Feel free to add new security group rules to the above security group as needed.
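
As an illustration, a hypothetical rule opening MySQL's default port only to the VPC's own address range (rather than to the whole internet) might look like this:

resource "aws_security_group_rule" "allow_mysql_in" {
  type              = "ingress"
  from_port         = 3306
  to_port           = 3306
  protocol          = "tcp"
  # Only reachable from within the VPC (see the cidr in vpc.tf), not from the public internet
  cidr_blocks       = ["10.0.0.0/16"]
  security_group_id = aws_security_group.sg_for_ec2_instances.id
}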

variables.tf

This file defines all the variables that you will pass in while creating the infrastructure:

variable "cluster_name" {
  description = "The name to use to create the cluster and the resources. Only alphanumeric characters and dash allowed (e.g. 'my-cluster')"
}
variable "ssh_key_name" {
  description = "SSH key to use to enter and manage the EC2 instances within the cluster. Optional"
  default     = ""
}
variable "instance_type_spot" {
  default = "t3a.medium"
}
variable "spot_bid_price" {
  default     = "0.0113"
  description = "How much you are willing to pay as an hourly rate for an EC2 instance, in USD"
}
variable "min_spot_instances" {
  default     = "1"
  description = "The minimum EC2 spot instances to have available within the cluster when the cluster receives less traffic"
}
variable "max_spot_instances" {
  default     = "5"
  description = "The maximum EC2 spot instances that can be launched during periods of high traffic"
}
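
For example, you could supply these variables through a terraform.tfvars file; the values below are purely illustrative:

cluster_name       = "my-cluster"
ssh_key_name       = "my-ssh-key"
instance_type_spot = "t3a.medium"
spot_bid_price     = "0.0113"
min_spot_instances = "1"
max_spot_instances = "5"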

Still having trouble?

We are available for Terraform consulting. Get in touch.


Conclusion

We've published all of the above as a Terraform module on GitHub, which you can easily drop into your project.

Now that you are ready to launch your AWS ECS cluster using Terraform, what will you build next? Let us know in the comments.

The Benefits of Outsourcing Software Development

A lot has been written about software development outsourcing and the myriad opportunities and changes it has brought to the global economy. What we have witnessed in the last 15 years has proved one thing for sure: outsourcing is here to stay. So let's recap why you should take advantage of software outsourcing and how to get started.

It takes money, time, and, not least, a large pool of talent to develop applications in-house, and when businesses lack one or more of these key elements, they turn to software outsourcing to get the job done. Simply put, software development outsourcing is an arrangement between a business and a contractor that passes the software development work on to third-party experts.

The outsourcing of software is a global trend that has turned into the new normal in business operations. It is even somewhat considered the holy grail for growing competitive advantage, and it increasingly becomes an integral part of business strategies across industries. The reasons are clear: it allows companies to achieve greater economies of scale (proportionate cost savings gained from an increased level of production) and, even more importantly, to focus on their core competencies without spending excessive amounts of time or money. All of these aspects boil down to this simple equation:

Software outsourcing = Cost effectiveness + Better Product (enhanced Customer Experience) + Timely Delivery

It then comes as no surprise that the industry stats paint an identical picture. A report by the Information Services Group that covered only contracts over $25 million puts the annual global revenue from IT outsourcing at between $60 and $70 billion. The latest GSA report estimates that 70% of the surveyed companies are going to rely on outsourcing even more in the next couple of years, while 35% of them will do so significantly.

So let’s get a better look into the benefits that software outsourcing provides and how you can leverage them to grow your business.


Broader access to talent and technology

For most companies, it is simply not feasible to have an in-house expert in every technology available in the digital ocean. Indeed, one of the main reasons for outsourcing software development is to get the best talent on board, irrespective of geographic location and business hours.

On a broader scale, outsourcing also takes a big chunk of the stress away: you needn't worry about every little detail when experienced professionals are taking care of it backstage. You can save the training and guidance for the people in your organization who perform the core capabilities of the company, and you don't need to spend resources and time managing the project, while simultaneously enjoying shorter development times (and, you guessed it, a quicker time to market).

Increased Focus on Core Business

Marketing 101: differentiation is key to success (meaning profitability). Markets in (almost) every industry are more saturated today than we have ever seen them. Therefore, now more than ever, we need to find a way to innovate and outrun the competition with better offerings. Outsourcing software development strengthens the focus on improving core processes, as it allows your in-house team to concentrate on the strategic goals that grow your business rather than feeling overwhelmed by work outside their scope of expertise.

Cost Effectiveness

On the one hand, outsourcing brings cost savings on salaries and employee benefits thanks to smaller in-house team sizes (after all, you won't need a specialist for every technology used throughout the project). On the other hand, depending on the type of outsourcing you select (onshore, nearshore, offshore), additional cost efficiency is unlocked by the significant wage gap between developed and developing countries.

Improved Risk Management

Finance 101: don't put all your eggs in one basket. We are so used to the idea of diversifying financial assets to ensure the long-term viability of a company that we rarely think twice about it. The same rule applies to software development projects, and just like with portfolios, it takes a well-prepared strategy to reap the benefits.

A great way to mitigate risk is to split the project into components and assign those to different vendors. Of course, due diligence is needed: check references, look at their portfolio, discuss requirements and then make your decision.

Enhanced Security

If your in-house team does not include IT experts, there is a high possibility that your software's security is not in good hands. Security issues mean, among many other things, that sensitive information about the company can be leaked.

Software development outsourcing covers the bases on the security front, as the programmers' job is to make sure that the code, and the processes through which it is built, are up to the industry's highest standards.

Spend less on Support

Software needs continuous attention in the form of maintenance and support. If you do this in-house, you need to set up a team to look after ongoing modifications and bugs. Outsourcing frees up those resources, as the vendor looks after this on your behalf. Also, with offshore outsourcing, the time difference may play in your favour by allowing for 24-hour business operations.

Fast-Forward time to market

We all know plenty about the first-mover advantage. Getting it, however, is not an easy task. Relying solely on your in-house team presents a significant obstacle when time is a crucial factor. Outsourcing offers a great opportunity to lay out the timeline of the project and to have dedicated engineers at your service who make it happen.

How to get on the outsourcing train (next stop Business Growth)

As with any other business decision, it all starts within. Analyze your needs, the benefits, and the strategic advantage that the application in question will bring to the business. After the initial audit is done, look for an outsourcing partner, and here the word partner is key: you are looking for someone who can bring to the table the experience, expertise, and motivation that will support a strong, open, and mutually beneficial relationship.

Shortlist potential candidates and share your requirements. Discuss your project and get a sense of their working style in these first meetings. Don't forget to do your due diligence and, if needed, compile an additional list of questions for potential partners to answer.

Leverage the flexibility, cost efficiency and competitive advantage that software outsourcing provides. This is a great way to introduce digital transformation and growth while minimizing risk and optimizing profits.

Want to talk more about outsourcing a project? Contact us for more information.