<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2026-04-22T21:40:28+00:00</updated><id>/feed.xml</id><title type="html">Dylan Redmond</title><subtitle>Personal Blog used as sample website to share files and tech posts.</subtitle><entry><title type="html">Multi Region Secrets KMS Keys Silent Error</title><link href="/2026/04/22/mrk.html" rel="alternate" type="text/html" title="Multi Region Secrets KMS Keys Silent Error" /><published>2026-04-22T20:00:30+00:00</published><updated>2026-04-22T20:00:30+00:00</updated><id>/2026/04/22/mrk</id><content type="html" xml:base="/2026/04/22/mrk.html"><![CDATA[<h1 id="multi-region-keys-silent-error">multi region keys silent error</h1>

<h2 id="preface">Preface</h2>

<p>I’ve been working on multi-region deployments a bit in 2026, and as a result I’ve been replicating AWS Secrets Manager secrets across regions.</p>

<p>You can find more on how multi-region keys work <a href="https://docs.aws.amazon.com/kms/latest/developerguide/mrk-how-it-works.html">here</a>. Essentially, in backup and recovery architectures, they allow you to process encrypted data without interruption, even during a region outage.</p>

<p>In basic terms, data maintained in the backup region can be decrypted in the backup region <strong>and</strong> data newly encrypted in the backup region can be decrypted in the primary region once that region is restored.</p>

<p>So if Region A fails, you flip traffic to Region B, and once Region A is restored, the data encrypted during the downtime in the backup region can still be decrypted in the original region.</p>

<p>Now, having worked with AWS for years, the times when something doesn’t work as expected are few and far between. It can be easy to think something isn’t working as expected <strong>for you</strong> and quickly conclude: ah, this must be a bug. But when you think a little bit more about the number of users and the scale AWS operates at, usually it’s not a bug, and you realise you may have been doing something in a way it wasn’t intended to be done.</p>

<p>Having said that, I believe I’ve discovered some unexpected behaviour. At the time of writing I’m most likely not the first to discover it, but I’ll explain below.</p>

<h2 id="secret-replication">Secret Replication</h2>

<p>Secret replication was launched in <a href="https://aws.amazon.com/about-aws/whats-new/2021/03/aws-secrets-manager-provides-support-to-replicate-secrets-in-aws-secrets-manager-to-multiple-aws-regions/">2021</a>. It allows you to create regional read replicas of secrets, managed by AWS Secrets Manager, so you can access secrets in multiple regions, similar to the multi-region KMS behaviour I described above.</p>

<p>I must preface this by saying that, from my experience with AWS outages, the main services that cause revenue impact are usually CDN or compute based: think EC2, ECS, EKS &amp; Lambda. In a scenario where ECS is impacted in us-east-1 and new tasks cannot be launched due to API errors, it’s likely that the Secrets Manager APIs in the same region would be <strong>unaffected</strong> at that time. Still, if you’re going to implement multi-region and disaster recovery you will want to cover all bases, as the last thing you want is your region switchover failing due to cross-region Secrets Manager API errors during an outage. Replicating secrets combats this issue.</p>

<p><img src="/images/mrk/image.png" alt="" /></p>

<h2 id="how-to-actually-do-it">How to actually do it</h2>

<p>For simplicity, I’ve used my beloved CloudFormation to create a simple Secret in us-east-1 using the following template (note there’s no region replication):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Resources:
  Secret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: MultiRegionTest
      Description: Test MultiRegionSecretWithMrk
      SecretString: TestString
</code></pre></div></div>

<p>After the stack reaches create complete, I’ve done an update to add <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/TemplateReference/aws-properties-secretsmanager-secret-replicaregion.html">ReplicaRegions</a>. The only issue is that I don’t have any multi-region KMS keys in my account, so I’ve taken <code class="language-plaintext highlighter-rouge">mrk-1234abcd12ab34cd56ef12345678990ab</code> as the multi-region key from the AWS docs <a href="https://docs.aws.amazon.com/kms/latest/developerguide/mrk-how-it-works.html">here</a>.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Resources:
  Secret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: MultiRegionTest
      Description: Test MultiRegionSecretWithMrk
      SecretString: TestString
      ReplicaRegions:
        - Region: us-east-2
          KmsKeyId: mrk-1234abcd12ab34cd56ef12345678990ab
</code></pre></div></div>

<p>The surprising result is that the CloudFormation stack Update completes without any issues:</p>

<p><img src="/images/mrk/image-1.png" alt="" /></p>

<p>And upon checking the Secrets Manager console or using the CLI, you’ll see the replication status set to InProgress:</p>

<p><img src="/images/mrk/image-2.png" alt="" /></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws secretsmanager describe-secret --secret-id MultiRegionTest
[...]
    "PrimaryRegion": "us-east-1",
    "ReplicationStatus": [
        {
            "Region": "us-east-2",
            "KmsKeyId": "mrk-1234abcd12ab34cd56ef12345678990ab",
            "Status": "InProgress"
        }
    ]
}
</code></pre></div></div>

<p>Then, after AWS finishes processing the <a href="https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_ReplicateSecretToRegions.html">ReplicateSecretToRegions</a> API call, you’ll see a failure:</p>

<p><img src="/images/mrk/image-3.png" alt="" /></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    "ReplicationStatus": [
        {
            "Region": "us-east-2",
            "KmsKeyId": "mrk-1234abcd12ab34cd56ef12345678990ab",
            "Status": "Failed",
            "StatusMessage": "Replication failed: Secrets Manager can't encrypt the secret value: Invalid keyId 'mrk-1234abcd12ab34cd56ef12345678990ab' (Service: AWSKMS; Status Code: 400; Error Code: NotFoundException; Request ID: cb9c4973-1626-4c99-8e57-42f7f886d859; Proxy: null)"
        }
    ]
</code></pre></div></div>

<p>This behaviour isn’t unique to CloudFormation; the same thing occurs via any CreateSecret API call, for example via the CLI:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws secretsmanager create-secret --name TestSecretMrkFailure --add-replica-regions Region=us-west-2,KmsKeyId=mrk-1234abcd12ab34cd56ef12345678990aa
{
    "ARN": "arn:aws:secretsmanager:us-east-1:777864510236:secret:TestSecretMrkFailure-4BS2cc",
    "Name": "TestSecretMrkFailure",
    "ReplicationStatus": [
        {
            "Region": "us-west-2",
            "KmsKeyId": "mrk-1234abcd12ab34cd56ef12345678990aa",
            "Status": "InProgress"
        }
    ]
}
</code></pre></div></div>

<p>What’s even more strange is that when providing a made-up MRK for this secret, I get a response that replication succeeded:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws secretsmanager describe-secret --secret-id TestSecretMrkFailure
{
    "ARN": "arn:aws:secretsmanager:us-east-1:777864510236:secret:TestSecretMrkFailure-4BS2cc",
    "Name": "TestSecretMrkFailure",
    "LastChangedDate": "2026-04-22T22:13:45.699000+01:00",
    "CreatedDate": "2026-04-22T22:13:45.687000+01:00",
    "PrimaryRegion": "us-east-1",
    "ReplicationStatus": [
        {
            "Region": "us-west-2",
            "KmsKeyId": "mrk-1234abcd12ab34cd56ef12345678990aa",
            "Status": "InSync",
            "StatusMessage": "Replication succeeded"
        }
    ]
}
</code></pre></div></div>

<p>Upon further testing, replication succeeds on initial creation because the secret above was created with no initial value. When you then attempt to update the secret with a value, you get an error on replication. I guess that tells us that when <code class="language-plaintext highlighter-rouge">ReplicateSecretToRegions</code> is called with a <code class="language-plaintext highlighter-rouge">SecretId</code>, additional actions or validation are performed on AWS’ side.</p>

<p>As this is an API-side issue, it affects all IaC providers too: Terraform, Pulumi, CDK, etc.</p>

<h2 id="conclusion">Conclusion</h2>

<p>I would expect the create operation to fail validation if an invalid MRK is provided, but that doesn’t seem to be the case. In fact, there looks to be zero validation on the KmsKeyId value at all:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ aws secretsmanager create-secret --name TestSecretMrkFailure4 --add-replica-regions Region=us-west-2,KmsKeyId=MyNameIsDylan$$$$
{
    "ARN": "arn:aws:secretsmanager:us-east-1:777864510236:secret:TestSecretMrkFailure4-6IyW1q",
    "Name": "TestSecretMrkFailure4",
    "ReplicationStatus": [
        {
            "Region": "us-west-2",
            "KmsKeyId": "MyNameIsDylan1610916109",
            "Status": "InProgress"
        }
    ]
}
</code></pre></div></div>
<p>I’ll look at ways to guard against this in future posts…</p>
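<p>In the meantime, a client-side pre-flight check is one option. The sketch below is my own assumption of a workable guard, not an AWS feature: validate the KmsKeyId format locally before calling Secrets Manager, then (commented out here, as it needs real credentials) confirm the key actually exists in the replica region:</p>

```shell
# Hypothetical pre-flight check before adding ReplicaRegions: since the
# service does no validation at create time, do it client-side instead.
is_valid_kms_key_id() {
  local key_id="$1"
  # Multi-region keys: "mrk-" followed by 32 hex characters.
  [[ "$key_id" =~ ^mrk-[0-9a-f]{32}$ ]] && return 0
  # Single-region keys: a UUID-shaped key id.
  [[ "$key_id" =~ ^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$ ]] && return 0
  # Full key ARNs also pass (existence still needs checking).
  [[ "$key_id" =~ ^arn:aws:kms: ]] && return 0
  return 1
}

# The key used in this post fails the check (33 hex chars, not 32):
is_valid_kms_key_id "mrk-1234abcd12ab34cd56ef12345678990ab" || echo "invalid format"

# Even a well-formed id may not exist; a describe-key call in the replica
# region surfaces NotFoundException *before* replication is attempted:
# aws kms describe-key --key-id "$key_id" --region us-east-2
```

A check like this in CI would have turned the silent replication failure above into a loud pre-deploy error.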

<p>PS: remember to delete these secrets after use, <a href="https://aws.amazon.com/secrets-manager/pricing/">they’re $0.40 per secret per month. A replica secret is considered a distinct secret and will also be billed at $0.40 per replica per month.</a></p>]]></content><author><name></name></author><category term="aws" /><category term="cloud" /><category term="devops" /><category term="platform" /><category term="secrets" /><category term="kms" /><category term="multi-region" /><summary type="html"><![CDATA[multi region keys silent error]]></summary></entry><entry><title type="html">The Cloud in 2026: Growth and Predictions</title><link href="/2025/12/30/Cloud-in-2026.html" rel="alternate" type="text/html" title="The Cloud in 2026: Growth and Predictions" /><published>2025-12-30T07:00:30+00:00</published><updated>2025-12-30T07:00:30+00:00</updated><id>/2025/12/30/Cloud-in-2026</id><content type="html" xml:base="/2025/12/30/Cloud-in-2026.html"><![CDATA[<h1 id="cloud-computing-in-2026-evolution-and-predictions">Cloud Computing in 2026: Evolution and Predictions</h1>

<h2 id="table-of-contents">Table of Contents</h2>

<ol>
  <li><a href="#preface">Preface</a></li>
  <li><a href="#platform-engineering-at-scale">Platform Engineering at Scale</a></li>
  <li><a href="#ai-in-devops-the-slop-and-the-promise">AI in DevOps: The Slop and the Promise</a></li>
  <li><a href="#finops-cost-vs-business-outcomes">FinOps: Cost vs. Business Outcomes</a></li>
  <li><a href="#cloudflares-growing-market-share">Cloudflare’s Growing Market Share</a></li>
  <li><a href="#getting-back-to-basics-with-books">Getting Back to Basics with Books</a></li>
  <li><a href="#conclusion">Conclusion</a></li>
</ol>

<h2 id="preface">Preface</h2>

<p>As 2025 comes to an end, I thought I’d look ahead to 2026, take stock of the current state of DevOps/Platform Engineering, and share some general ramblings on what I think is to come this year and in the coming years.</p>

<p>In some sense a lot has changed since I began working in DevOps in 2018, but at the same time some things stay the same. I think 2025 was the first year that we couldn’t say Kubernetes didn’t exist a decade ago, and since then I still haven’t managed to avoid manually scaling clusters and control planes as etcd ran out of memory due to growing pods. I also still haven’t avoided updating clusters the old-fashioned way (maybe EKS Auto Mode will solve that in 2026, though).</p>

<p>In this post, I’ll share my thoughts on what changed in the past year and what I expect to see in 2026. With my experience working at different scales, from smaller teams to large enterprises with thousands of developers, I feel I’ve seen a lot of change over the past few years.</p>

<h2 id="platform-engineering-at-scale">Platform Engineering at Scale</h2>

<p>Platform engineering has become a significant focus recently, and for good reason. From what I’m seeing, almost all enterprises have adopted or are currently adopting internal developer platforms. Backstage from Spotify is still the de facto standard as it’s open source, though I would not knock the traditional setup of running Jenkins. A Backstage IDP can feel like a frontend for triggering GitHub Actions and GitLab runners, much like frontends were built for Jenkins in the past. I do think we’ll see growth in the IDP space, especially with plugins for LLMs and chat bots that provision infrastructure via runners.</p>

<p>That said, in smaller companies a full-blown IDP isn’t always necessary. A well-crafted set of CLI tools, scripts, and documentation can achieve similar results without the overhead, and it can also scale. A CLI tool with a specific set of instructions that lets devs provision infrastructure can often produce perfect results and be a crowd-pleaser among devs, without the bloat of maintaining an IDP and dealing with its downtime.</p>

<h2 id="ai-in-devops-the-slop-and-the-promise">AI in DevOps: The Slop and the Promise</h2>

<p>2025 was very much the year AI became mainstream in developers’ lives and in DevOps. Tools for provisioning infrastructure from natural language are beginning to pop up, and some of them seem promising [1].</p>

<p>However, it’s too early to say if these tools can reliably and safely provision production-grade infrastructure that is SOX and PCI compliant, amongst other things. MCPs for Datadog and Splunk, and LLMs in general, are at the stage where they can quickly point out a point of failure when troubleshooting an issue. With the increase in lines of code output by these LLMs and approved by developers into the codebase, it will be interesting to see if troubleshooting and supporting production incidents shifts left, from SRE/DevOps engineers directly into the hands of development teams. I would see that as a major pro for both, but I don’t think we’re there yet.</p>

<p>Much like well-written software, without appropriate context and well-crafted instructions, AI tools can generate outputs that fail in production environments. I’ve spent considerable time debugging Terraform configurations that seemed sound until they encountered edge cases or security requirements. Take a Postgres instance provisioned with automatic major version upgrades enabled: LLMs can sometimes favour breaking things over being conservative.</p>

<p>Looking to 2026, I expect this technology to mature significantly. Companies will make more widespread use of LLM gateways [2]. Rather than having multiple places to use AI tools, as is the case today, we’ll see consolidation to preferred choices that integrate effectively with existing workflows. LLM gateways are a tool for this, as they open up wider access to models while tracking usage and cost.</p>

<h2 id="finops-cost-vs-business-outcomes">FinOps: Cost vs. Business Outcomes</h2>

<p>Cost optimization is a passion of mine, and with the spend on AI it really should be of more importance than ever. But in practice it can often be a hindrance to delivery at early stages and be treated as an afterthought. There’s an old adage of optimising for business outcomes vs. optimising for cost, and usually the business outcomes come first and optimisation after.</p>

<p>2026 might not be the year that FinOps goes fully mainstream, but it will be quite a year for Anthropic, OpenAI and the chasing field in AI. Not to mention it’s the second fiscal year of AI on public companies’ books. One of two things will happen from AI and its increased efficiency: there’s no doubt the tools bring an increase in productivity, but if this doesn’t in turn equate to an increase in revenue (i.e. more money being spent by customers), then the other optimisation could come in the form of cost reduction. Hopefully that falls on the cloud bill rather than on headcount…</p>

<p>For 2026 though, I expect FinOps to continue to grow and become more integrated into development processes. Infracost [3] is making this easy with direct code scanning and change management from within the PR. We’ll also see growth from CloudZero and other cost management tools. Rather than separate teams conducting postmortems on expensive deployments, we’ll see cost considerations built into CI/CD pipelines and development workflows. Tools will provide real-time cost feedback, allowing developers to make informed decisions without sacrificing speed.</p>

<p>The key will be finding the right balance. With enough cost awareness to prevent waste, but not so much that it hinders innovation.</p>

<h2 id="cloudflares-growing-market-share">Cloudflare’s Growing Market Share</h2>

<p>Cloudflare has been around for a while now and first came to my attention in 2020, when I saw some smaller AWS customers leaving CloudFront for Cloudflare, though back then Cloudflare didn’t have the compute offering AWS had. This was followed by their R2 offering, with no egress fees and cheaper storage costs than S3. I believe they’ll continue to grow and start taking enterprise business from AWS. Their edge network and security offerings have matured significantly, providing compelling alternatives to traditional cloud providers. And while I haven’t used their Workers service, I think 2026 might be the year I dip my toes in, and from an AWS man that is telling.</p>

<p>In 2026, expect to see more enterprises adopting Cloudflare for specific use cases, potentially leading to hybrid architectures where AWS handles core compute while Cloudflare manages edge services and security. So even if their compute runs on ECS or EKS, it might be fronted by Cloudflare, with Workers handling event-driven processing with R2. This is dependent on the cost side of things I mentioned above coming into play.</p>

<h2 id="getting-back-to-basics-with-books">Getting Back to Basics with Books</h2>

<p>Following tech and DevOps online provides a lot of information, but much of it feels generic and not focused on enterprise-scale challenges. I also find the time I spend on Twitter (X) and Reddit increasing to a point that I don’t like. While time spent on those sites is not desirable for me, they are often the place to get updates on new releases and follow people who publish content. In 2026, though, I’ll be moving this to LinkedIn and Slack communities, mainly the AWS Builder Community Slack and the FinOps Slack. I’ve also recently purchased three O’Reilly books to get back to fundamentals and gain deeper knowledge. I’m looking forward to getting through these:</p>

<p><img src="/assets/PC300272.JPG" alt="Books for learning" /></p>

<p>I’m hoping these books will reinforce my foundation in software engineering and architecture with the depth that online content often lacks. In a field moving as fast as cloud computing, it’s easy to get caught up in the latest trends without understanding the underlying principles. These books should help bridge that gap.</p>

<h2 id="conclusion">Conclusion</h2>

<p>So, in my opinion 2026 will continue to be a year of maturation in DevOps and Platform Engineering. IDPs will continue to grow in popularity among enterprises, and AI will become more integrated into workflows but at a more refined scale, likely with a reduction in the number of AI providers being used (Cursor or Claude Code, not both). Cost optimization might not be baked into development processes yet but will continue to pop up. We’ll continue to see Cloudflare rise, and I’m looking forward to following any product announcements they have this year; these might challenge the traditional cloud providers. And a renewed focus on fundamentals will ensure sustainable growth.</p>

<p>From my experience, the most successful organizations will be those that balance innovation with pragmatism, adopting new technologies while maintaining a strong foundation. Some that go ‘all-in’ on AI will bear fruit, while others just won’t see a payoff and will have to pivot back to focussing on what their products were before AI and see where they can continue to grow there. As we enter 2026, I’m excited to see how these trends play out in the real world, and we’ll see if I’m on the money or completely off the mark…</p>

<p>[1] https://spacelift.io/intent
[2] https://llmgateway.io
[3] https://www.infracost.io/</p>]]></content><author><name></name></author><category term="aws" /><category term="cloud" /><category term="devops" /><category term="platform" /><summary type="html"><![CDATA[Cloud Computing in 2026: Evolution and Predictions]]></summary></entry><entry><title type="html">Passing Properties from S3 to Batch via EventBridge</title><link href="/2024/09/02/S3-EventBridge-Batch.html" rel="alternate" type="text/html" title="Passing Properties from S3 to Batch via EventBridge" /><published>2024-09-02T07:00:30+00:00</published><updated>2024-09-02T07:00:30+00:00</updated><id>/2024/09/02/S3-EventBridge-Batch</id><content type="html" xml:base="/2024/09/02/S3-EventBridge-Batch.html"><![CDATA[<h1 id="passing-properties-from-s3-to-batch-via-eventbridge">Passing Properties from S3 to Batch via EventBridge</h1>

<h2 id="table-of-contents">Table of Contents</h2>

<ol>
  <li><a href="#overview-of-the-workflow">Overview of the Workflow</a></li>
  <li><a href="#prerequisites">Prerequisites</a></li>
  <li><a href="#step-by-step-implementation">Step-by-Step Implementation</a>
    <ol>
      <li><a href="#1-s3-bucket-setup">S3 Bucket Setup</a></li>
      <li><a href="#2-create-eventbridge-rule">Create EventBridge Rule</a></li>
      <li><a href="#3-define-the-input-transformer">Define the Input Transformer</a></li>
      <li><a href="#4-create-batch-job-definition">Create Batch Job Definition</a></li>
      <li><a href="#5-finalize-and-deploy">Finalize and Deploy</a></li>
    </ol>
  </li>
  <li><a href="#conclusion">Conclusion</a></li>
</ol>

<p>In AWS, connecting different services seamlessly is key to building robust, automated workflows. A common scenario is passing properties from an S3 event to an AWS Batch job using EventBridge. This workflow allows you to trigger a Batch job when a specific event happens in an S3 bucket, passing along important metadata or configuration as part of the job’s environment variables.</p>

<p>In this blog post, we’ll explore how to implement this architecture using Infrastructure as Code (IaC) with Terraform. This approach ensures your infrastructure is version-controlled, repeatable, and scalable.</p>

<h2 id="overview-of-the-workflow">Overview of the Workflow</h2>

<p>The basic workflow involves the following steps:</p>

<ol>
  <li><strong>S3 Event</strong>: An object is created or modified in an S3 bucket.</li>
  <li><strong>EventBridge Rule</strong>: The event triggers an EventBridge rule.</li>
  <li><strong>Input Transformer</strong>: The EventBridge rule uses an Input Transformer to extract and transform specific properties from the S3 event.</li>
  <li><strong>AWS Batch</strong>: The transformed properties are passed as environment variables to the AWS Batch job.</li>
</ol>

<h3 id="prerequisites">Prerequisites</h3>

<p>Before we dive into the setup, ensure you have the following:</p>

<ul>
  <li>An S3 bucket set up to store your files.</li>
  <li>An AWS Batch compute environment and job queue configured.</li>
  <li>Basic understanding of Terraform and AWS services like S3, EventBridge, and Batch.</li>
</ul>

<h2 id="step-by-step-implementation">Step-by-Step Implementation</h2>

<h3 id="1-s3-bucket-setup">1. S3 Bucket Setup</h3>

<p>First, you need an S3 bucket where your files will be uploaded. This bucket will emit events that will trigger the AWS Batch jobs.</p>

<div class="language-hcl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">resource</span> <span class="s2">"aws_s3_bucket"</span> <span class="s2">"example_bucket"</span> <span class="p">{</span>
  <span class="nx">bucket</span> <span class="p">=</span> <span class="s2">"my-event-trigger-bucket"</span>
<span class="p">}</span>
</code></pre></div></div>

<h3 id="2-create-eventbridge-rule">2. Create EventBridge Rule</h3>

<p>Next, create an EventBridge rule that listens for specific events from the S3 bucket. This rule filters for object-creation API calls (<code class="language-plaintext highlighter-rouge">PutObject</code> and <code class="language-plaintext highlighter-rouge">CompleteMultipartUpload</code>, delivered via CloudTrail) and triggers the Batch job.</p>

<div class="language-hcl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">resource</span> <span class="s2">"aws_cloudwatch_event_rule"</span> <span class="s2">"s3_event_rule"</span> <span class="p">{</span>
  <span class="nx">name</span>        <span class="p">=</span> <span class="s2">"s3-event-rule"</span>
  <span class="nx">event_pattern</span> <span class="p">=</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventName": ["PutObject", "CompleteMultipartUpload"],
    "requestParameters": {
      "bucketName": ["${aws_s3_bucket.example_bucket.bucket}"]
    }
  }
}
</span><span class="no">EOF
</span><span class="p">}</span>
</code></pre></div></div>
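<p>One caveat worth noting: the <code class="language-plaintext highlighter-rouge">AWS API Call via CloudTrail</code> detail-type only fires if CloudTrail data events are enabled for the bucket. As an alternative (an assumption on my part, not part of the original walkthrough), S3 can publish events to EventBridge natively, which lets a rule match the simpler <code class="language-plaintext highlighter-rouge">Object Created</code> detail-type instead:</p>

```hcl
# Assumed alternative: enable S3's native EventBridge integration on the
# bucket so object events reach EventBridge without CloudTrail data events.
resource "aws_s3_bucket_notification" "eventbridge" {
  bucket      = aws_s3_bucket.example_bucket.id
  eventbridge = true
}
```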

<h3 id="3-define-the-input-transformer">3. Define the Input Transformer</h3>

<p>The Input Transformer in EventBridge allows you to manipulate the event data before passing it to the Batch job. For example, you can extract the S3 bucket name and object key from the event and pass them as environment variables.</p>

<div class="language-hcl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">resource</span> <span class="s2">"aws_cloudwatch_event_target"</span> <span class="s2">"batch_target"</span> <span class="p">{</span>
  <span class="nx">rule</span> <span class="p">=</span> <span class="nx">aws_cloudwatch_event_rule</span><span class="err">.</span><span class="nx">s3_event_rule</span><span class="err">.</span><span class="nx">name</span>
  <span class="nx">arn</span>  <span class="p">=</span> <span class="nx">aws_batch_job_queue</span><span class="err">.</span><span class="nx">my_job_queue</span><span class="err">.</span><span class="nx">arn</span>

  <span class="nx">input_transformer</span> <span class="p">{</span>
    <span class="nx">input_paths</span> <span class="p">=</span> <span class="p">{</span>
      <span class="s2">"bucket"</span> <span class="p">=</span> <span class="s2">"$.detail.requestParameters.bucketName"</span>
      <span class="s2">"key"</span>    <span class="p">=</span> <span class="s2">"$.detail.requestParameters.key"</span>
    <span class="p">}</span>

    <span class="nx">input_template</span> <span class="p">=</span> <span class="o">&lt;&lt;</span><span class="no">EOF</span><span class="sh">
{
  "jobName": "my-batch-job",
  "jobQueue": "${aws_batch_job_queue.my_job_queue.arn}",
  "jobDefinition": "${aws_batch_job_definition.my_job_definition.arn}",
  "containerOverrides": {
    "environment": [
      {"name": "S3_BUCKET", "value": &lt;bucket&gt;},
      {"name": "S3_KEY", "value": &lt;key&gt;}
    ]
  }
}
</span><span class="no">EOF
</span>  <span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>

<h3 id="4-create-batch-job-definition">4. Create Batch Job Definition</h3>

<p>Finally, define the Batch job that will be triggered. The job definition should include the environment variables that are passed from the EventBridge Input Transformer.</p>

<div class="language-hcl highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nx">resource</span> <span class="s2">"aws_batch_job_definition"</span> <span class="s2">"my_job_definition"</span> <span class="p">{</span>
  <span class="nx">name</span> <span class="p">=</span> <span class="s2">"my-batch-job-definition"</span>

  <span class="nx">container_properties</span> <span class="p">=</span> <span class="nx">jsonencode</span><span class="err">(</span><span class="p">{</span>
    <span class="nx">image</span>      <span class="p">=</span> <span class="s2">"my-docker-image"</span>
    <span class="nx">vcpus</span>      <span class="p">=</span> <span class="mi">2</span>
    <span class="nx">memory</span>     <span class="p">=</span> <span class="mi">2048</span>
    <span class="nx">environment</span> <span class="p">=</span> <span class="p">[</span>
      <span class="p">{</span>
        <span class="nx">name</span>  <span class="p">=</span> <span class="s2">"S3_BUCKET"</span>
        <span class="nx">value</span> <span class="p">=</span> <span class="s2">"placeholder"</span> <span class="c1">// This will be overwritten by EventBridge</span>
      <span class="p">},</span>
      <span class="p">{</span>
        <span class="nx">name</span>  <span class="p">=</span> <span class="s2">"S3_KEY"</span>
        <span class="nx">value</span> <span class="p">=</span> <span class="s2">"placeholder"</span> <span class="c1">// This will be overwritten by EventBridge</span>
      <span class="p">}</span>
    <span class="p">]</span>
  <span class="p">}</span><span class="err">)</span>

  <span class="nx">type</span> <span class="p">=</span> <span class="s2">"container"</span>
<span class="p">}</span>
</code></pre></div></div>
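<p>To make the hand-off concrete, here is a hypothetical container entrypoint showing how the overridden variables arrive inside the job. The default values and the commented <code class="language-plaintext highlighter-rouge">aws s3 cp</code> call are illustrative assumptions, not part of the Terraform above:</p>

```shell
# Hypothetical entrypoint for the Batch container. At runtime S3_BUCKET and
# S3_KEY are injected by the EventBridge containerOverrides; the defaults
# below exist only so the script is runnable standalone.
set -euo pipefail

S3_BUCKET="${S3_BUCKET:-my-event-trigger-bucket}"
S3_KEY="${S3_KEY:-data/input.csv}"

msg="Processing s3://${S3_BUCKET}/${S3_KEY}"
echo "$msg"

# In a real job you would then fetch and process the object, e.g.:
# aws s3 cp "s3://${S3_BUCKET}/${S3_KEY}" /tmp/input
```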

<h3 id="5-finalize-and-deploy">5. Finalize and Deploy</h3>

<p>With the S3 bucket, EventBridge rule, Input Transformer, and Batch job definition in place, you can deploy your infrastructure using Terraform. Make sure you apply the configuration and monitor the process to ensure everything is working as expected.</p>

<h2 id="conclusion">Conclusion</h2>

<p>By following this setup, you can efficiently pass properties from S3 events to AWS Batch jobs using EventBridge. This method is powerful for building event-driven architectures that respond dynamically to changes in your S3 buckets. With Terraform, you ensure that your infrastructure is easily manageable and reproducible.</p>

<p>For more detailed information on the specific configurations and additional features, check out the official <a href="https://docs.aws.amazon.com/batch/latest/userguide/batch-cwe-target.html#cwe-input-transformer">AWS documentation on EventBridge and Batch integration</a>.</p>

<p>Happy automating!</p>]]></content><author><name></name></author><category term="aws" /><category term="IaC" /><category term="s3" /><category term="batch" /><category term="terraform" /><summary type="html"><![CDATA[Passing Properties from S3 to Batch via EventBridge]]></summary></entry><entry><title type="html">CloudFormation vs. Terraform in 2024</title><link href="/2024/03/18/CFN-vs-TF.html" rel="alternate" type="text/html" title="CloudFormation vs. Terraform in 2024" /><published>2024-03-18T07:00:30+00:00</published><updated>2024-03-18T07:00:30+00:00</updated><id>/2024/03/18/CFN-vs-TF</id><content type="html" xml:base="/2024/03/18/CFN-vs-TF.html"><![CDATA[<h1 id="an-experts-insight">An “Expert’s” Insight</h1>

<h2 id="table-of-contents">Table of Contents</h2>

<ol>
  <li><a href="#preface">Preface</a></li>
  <li><a href="#ecosystem">Ecosystem</a></li>
  <li><a href="#compatibilitysupport">Compatibility/Support</a></li>
  <li><a href="#opensource-vs-licencing-vs-native">OpenSource vs. Licencing vs. Native</a></li>
  <li><a href="#speed">Speed</a></li>
  <li><a href="#associated-toolsets--accessories">Associated Toolsets &amp; Accessories</a></li>
  <li><a href="#programming-and-dev-experience">Programming and Dev Experience</a></li>
  <li><a href="#future-of-both-tools">Future of Both Tools</a></li>
  <li><a href="#conclusion">Conclusion</a></li>
</ol>

<h2 id="preface">Preface</h2>

<p>My journey with IaC began in 2018, focusing exclusively on CloudFormation. Over the years, I honed my skills to become a Subject Matter Expert in CloudFormation while working at AWS. This involved a lot of things, mainly providing complex work evidence of solving CFN issues for customers, and also passing a board with the CFN dev team in Seattle. However, since 2022, Terraform has been the go-to tool for all things IaC where I work.</p>

<p>This was quite a change, and while I still use CFN for a lot of my personal projects, I’ve been using Terraform almost entirely at work, apart from a few SAM stacks deployed via Terraform modules.</p>

<p>This transition offered me a unique perspective on both tools, and in this post, I’ll share my insights, comparing CloudFormation and Terraform across various dimensions when deploying infra to AWS. I’ll also show why I think CloudFormation is improving and closing the gap on Terraform and what I think is to come from both over the next few years.</p>

<h3 id="ecosystem">Ecosystem</h3>

<p><strong>CloudFormation:</strong> As a native AWS service, CloudFormation is tightly integrated with AWS, providing out-of-the-box support for almost every AWS resource. Its ecosystem is robust, with a wealth of templates and a supportive community. However, it can be somewhat insular, primarily catering to AWS resources.</p>

<p><strong>Terraform:</strong> Terraform’s ecosystem is expansive, supporting multiple providers beyond just AWS. Its modular approach, with the use of Terraform Registry, offers a vast collection of modules contributed by the community, enhancing its adaptability and extensibility across various cloud platforms and services.</p>

<h3 id="compatibilitysupport">Compatibility/Support</h3>

<p><strong>CloudFormation:</strong> CloudFormation launched back in 2011 and, whether you view this as a pro or a con, any template written on day one of CloudFormation can still be deployed today without issues or errors. For example, when working with older, more mature code bases this can be a benefit during updates. AWS will always support that JSON or YAML template.</p>

<p>Whereas with Terraform, when you go to revisit your long-running RDS DB that’s reaching EOL support, you’ll be faced with major upgrade changes in the <a href="https://github.com/hashicorp/terraform-provider-aws/issues/29842">terraform-provider-aws</a>. Similarly, your <code class="language-plaintext highlighter-rouge">template_file</code> data sources may no longer be supported, and you’ll need to spend time updating code to use the newer <a href="https://developer.hashicorp.com/terraform/language/functions/templatefile">templatefile</a> function. There’ll always be that management overhead when using Terraform.</p>
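<p>For illustration, that migration is typically a move from a data source to a function call. Something like the following sketch, where the template path and variables are hypothetical:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Old: the deprecated template_file data source
data "template_file" "user_data" {
  template = file("${path.module}/user_data.sh.tpl")

  vars = {
    region = "eu-west-1"
  }
}

# New: the built-in templatefile() function, no data source required
locals {
  user_data = templatefile("${path.module}/user_data.sh.tpl", {
    region = "eu-west-1"
  })
}
</code></pre></div></div>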

<p>Some other issues I’ve noticed creeping in recently are with AWS resource support and the speed of updates in Terraform.</p>

<p>At re:Invent 2023 we got <a href="https://aws.amazon.com/about-aws/whats-new/2023/11/aws-config-periodic-recording/">periodic recording for Config</a> on November 26th. This was supposed to come with OOTB support from CloudFormation via the <code class="language-plaintext highlighter-rouge">RecordingMode</code> parameter; we didn’t get it that night, but only had to wait until December 11th via <a href="https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/issues/1861">1861</a> - not too long considering it was AWS’ busiest week of the year and a lot of staff were in Vegas:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Type: AWS::Config::ConfigurationRecorder
Properties:
[...]
  RecordingGroup: 
    RecordingGroup
  RecordingMode: 
    RecordingMode
[...]
</code></pre></div></div>

<p>But it took a whopping three months for Terraform support to arrive, on February 23rd 2024, when it was delivered via <a href="https://github.com/hashicorp/terraform-provider-aws/issues/34577#issuecomment-1961262081">34577</a>!</p>

<p>Similarly, with CodePipeline we got the <a href="https://aws.amazon.com/about-aws/whats-new/2023/11/aws-codepipeline-pipeline-execution-source-revision-overrides/">V2 Pipeline Type</a> for re:Invent week on November 22, 2023. This lets you use a couple of new features, but most importantly lets you take advantage of cheaper pipeline pricing under the new type. It came with same-day support via the CloudFormation <code class="language-plaintext highlighter-rouge">PipelineType</code> <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-pipeline.html#cfn-codepipeline-pipeline-pipelinetype">parameter</a>.</p>

<p>This was brought to Terraform in January 2024 via <a href="https://github.com/hashicorp/terraform-provider-aws/issues/34122">34122</a>.</p>
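<p>Once you’re on a recent enough provider version, opting in from Terraform is a one-line change. As a rough sketch (the resource and role names here are illustrative, and the stages are omitted):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>resource "aws_codepipeline" "example" {
  name          = "example-pipeline"
  role_arn      = aws_iam_role.pipeline.arn # hypothetical IAM role
  pipeline_type = "V2"                      # defaults to "V1" when omitted

  # [...] artifact store and stages omitted
}
</code></pre></div></div>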

<p>I’m not complaining about having to wait here, and I understand that the AWS Terraform provider is maintained by a <a href="https://hashicorp.github.io/terraform-provider-aws/">small team</a>; I just expected these changes to be implemented at a quicker rate given the maturity of the AWS provider. The speed of feature delivery is food for thought when using these tools at scale.</p>

<p>Another example of a gap in support I was surprised to see was with data sources. To this day there’s no support for HTTP API Gateway custom domain name data blocks in Terraform, though I suspect that shortly after this is published it will land via <a href="https://github.com/hashicorp/terraform-provider-aws/issues/36027">36027</a>:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>data "aws_apigatewayv2_domain_name" "example" {
  domain_name = "api.example.com"
}
</code></pre></div></div>

<p>One thing Terraform should be commended on, though, is its support for multiple providers and tooling. For example, if you’re using CloudFormation and mTLS with API Gateway, your only real native option is to use ACM Private CA to issue certificates at a <strong>checks AWS Bill</strong> cost of $400/month per CA. If you’re using Terraform you can use the <a href="https://registry.terraform.io/providers/hashicorp/tls/latest/docs">TLS Provider</a> and self-sign certs for your API Gateway’s truststore:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>###############################################
# S3 bucket and TLS certificate for truststore
###############################################

resource "aws_s3_bucket" "truststore" {
  bucket = "${random_pet.this.id}-truststore"
  #  acl    = "private"
}

resource "aws_s3_object" "truststore" {
  bucket                 = aws_s3_bucket.truststore.bucket
  key                    = "truststore.pem"
  server_side_encryption = "AES256"
  content                = tls_self_signed_cert.example.cert_pem
}

resource "tls_private_key" "private_key" {
  algorithm = "RSA"
}

resource "tls_self_signed_cert" "example" {
  is_ca_certificate = true
  private_key_pem   = tls_private_key.private_key.private_key_pem

  subject {
    common_name  = "example.com"
    organization = "ACME Examples, Inc"
  }

  validity_period_hours = 12

  allowed_uses = [
    "cert_signing",
    "server_auth",
  ]
}
</code></pre></div></div>

<p>The equivalent of the above in CFN is a Lambda-backed custom resource, which would not be graceful at all. Using a mix of resource providers is where Terraform can outshine CloudFormation when deploying to AWS.</p>

<h3 id="opensource-vs-licencing-vs-native">OpenSource vs. Licencing vs. Native</h3>

<p>This is one of the hottest recent topics in the CloudFormation vs. Terraform debate. In the past, a very strong argument for Terraform was that it’s “open source”. However, in 2023 HashiCorp threw a curve ball by adopting a <a href="https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license">Business Source Licence</a>. I won’t go into what’s involved in all of that in this post, but you can think of services that used to be free and aren’t anymore.</p>

<p>Further, this sparked the creation of <a href="https://opentofu.org/">OpenTofu</a>, a truly open-source alternative. So, with a lack of transparency from HashiCorp, you’re faced with continuing to use Terraform and hoping it remains free, or seriously considering OpenTofu - not exactly a trivial task and not something you want your dev team spending cycles on. Anyone using CloudFormation won’t be having this conversation.</p>

<p>On the CloudFormation side, CloudFormation was once a completely closed shop, owned by a dev team responsible for implementing all of AWS’ resources into CFN. As the number of services AWS created grew, this became <a href="https://aws.amazon.com/blogs/devops/cloudformation-coverage/">unsustainable</a>, especially with two-pizza teams - it was difficult to stay consistent.</p>

<p>Today, there’s still a CFN service team in AWS, but they’ve built a platform where, at launch, other service teams in AWS must write their own provider code for CFN. This keeps things consistent across all of the teams.</p>

<p>While I don’t see the CloudFormation engine ever becoming fully open source, a lot of the resource providers have been made available on GitHub <a href="https://github.com/aws-cloudformation/resource-providers-list?tab=readme-ov-file">here</a>, which is a welcome change from having to contact AWS Support directly for all resource issues with no transparency at all.</p>

<p>There’s also the <a href="https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/projects/1">cloudformation-coverage-roadmap</a>, which provides a welcome amount of transparency into what is and isn’t coming to CloudFormation.</p>

<h3 id="speed">Speed</h3>

<p>Speed of deployment has been talked about a lot in CFN vs. TF but I haven’t seen any legitimate benchmark data.</p>

<p>From personal use, though, I do feel that Terraform is quicker to deploy, especially for specific AWS resources such as CloudFront, R53 and IAM updates. However, some things just take a while. For example, with ECS services, CloudFormation won’t mark the resource as <code class="language-plaintext highlighter-rouge">CREATE_COMPLETE</code> until a task has reached a <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html">steady state</a>; it polls DescribeTasks calls to confirm this, a process known as resource stabilization.</p>

<p>Terraform feels quicker at re-deploying failed deployments too, as it doesn’t roll back each resource the way CloudFormation does. Also, with Terraform you’d traditionally fail faster if a resource type or variable didn’t match an expected value; with CloudFormation you would only have hit this at stack creation.</p>

<p>However, just this week CloudFormation announced the following - <a href="https://aws.amazon.com/about-aws/whats-new/2024/03/aws-cloudformation-40-percent-faster-stack-creation/">Experience up to 40% faster stack creation with AWS CloudFormation</a>. I’ve yet to experiment with this new <code class="language-plaintext highlighter-rouge">CONFIGURATION_COMPLETE</code> state, but it looks like stabilization is now performed in parallel.</p>

<p>So while Terraform can be faster to deploy resources, you mightn’t get the baked-in stabilisation that comes with CloudFormation. Terraform does have better parallelism capabilities, though.</p>

<h3 id="associated-toolsets--accessories">Associated Toolsets &amp; Accessories</h3>


<p>Both tools perform similarly in this regard, with CloudFormation and Terraform each having tools to lint and enforce security best practice. <a href="https://github.com/aws-cloudformation/cfn-lint">cfn-lint</a> and <a href="https://github.com/terraform-linters/tflint-ruleset-aws">tflint</a> offer similar rule sets. <code class="language-plaintext highlighter-rouge">cfn-guard</code> also feels a lot like <code class="language-plaintext highlighter-rouge">tfsec</code>, although the latter was recently renamed from tfsec to trivy - again, renames are something we don’t see with CloudFormation.</p>

<p>When using <a href="https://www.cncf.io/projects/open-policy-agent-opa/">Open Policy Agent</a>, CloudFormation lets you use hooks, which can be finicky with OPA because hooks only support Python or Java. Because of this, you need the OPA CFN Hook, which does work. If you really want to use OPA with Terraform, you’ll need to pay for Terraform Cloud:</p>

<p><img src="/images/opal_tf.jpg" alt="OPA in Terraform" /></p>

<p><a href="https://github.com/bridgecrewio/checkov">checkov</a> is also useful for both CFN and TF and is free <em>for now</em>. The CDK also supports features for validation during synthesis time via <a href="https://github.com/cdklabs/cdk-validator-cfnguard">CfnGuardValidator</a> and supports OPA and Chekov.</p>

<h3 id="programming-and-dev-experience">Programming and Dev Experience</h3>

<p>The capabilities of HCL are where Terraform excels in comparison to CloudFormation. When using both of these tools at scale you begin to face teething issues. While CloudFormation has a decent set of pseudo parameters, such as <code class="language-plaintext highlighter-rouge">AWS::AccountId</code> &amp; <code class="language-plaintext highlighter-rouge">AWS::Region</code>, that enable multi-account, multi-region deploys, the intrinsic functions also work but aren’t a breeze to play with. These have been expanded on recently via the <code class="language-plaintext highlighter-rouge">AWS::LanguageExtensions</code> transform, but again it’s a little convoluted.</p>

<p>When using CFN at scale with Nested Stacks you’ll also face issues with Imports/Exports and Cross-Stack references and dependencies. I’m sure anyone reading this will wince at the following error:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>StackA Export StackA:ExportsOutputRefMyTableCD79AAA0A1504A18 cannot be deleted as it is in use by StackB
</code></pre></div></div>

<p>I would recommend avoiding these completely and using Parameter Store and <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/dynamic-references.html">dynamic references</a> for non-sensitive values.</p>

<p>Terraform’s development experience trumps CloudFormation’s here considerably. From my time working with tf, whenever you need to reference an existing resource or wire things together, there’s something useful to reach for. For example, <code class="language-plaintext highlighter-rouge">data</code> sources for references, modules for re-usability, <code class="language-plaintext highlighter-rouge">outputs</code> for debugging and referencing values - and the list goes on.</p>

<p><code class="language-plaintext highlighter-rouge">dynamic</code> Blocks can also be extremely useful within modules when aiming to create slightly different variations of resources but re-use the same code. I haven’t even mentioned <code class="language-plaintext highlighter-rouge">for_each</code> or <code class="language-plaintext highlighter-rouge">count</code>. The list of built-in <a href="https://developer.hashicorp.com/terraform/language/functions">functions</a> is endless too. You’ve also got the random provider to take care of dynamic placement of resources in subnets:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>data "aws_availability_zones" "available" {
state = "available"
}

resource "random_shuffle" "az" {
input = data.aws_availability_zones.available.names
result_count = 3
}
</code></pre></div></div>
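<p>To sketch the <code class="language-plaintext highlighter-rouge">dynamic</code> blocks mentioned above - one module, many slightly different security group rules. The variable and resource names here are illustrative:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>variable "ingress_rules" {
  type = list(object({
    port        = number
    cidr_blocks = list(string)
  }))
  default = [
    { port = 443, cidr_blocks = ["0.0.0.0/0"] },
    { port = 22, cidr_blocks = ["10.0.0.0/8"] },
  ]
}

resource "aws_security_group" "example" {
  name = "example-sg"

  # One ingress block is generated per entry in var.ingress_rules
  dynamic "ingress" {
    for_each = var.ingress_rules
    content {
      from_port   = ingress.value.port
      to_port     = ingress.value.port
      protocol    = "tcp"
      cidr_blocks = ingress.value.cidr_blocks
    }
  }
}
</code></pre></div></div>

<p>Callers of the module only supply a list of objects; the repetition lives in one place.</p>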

<p>CloudFormation definitely comes out second in this contest as the more primitive tool. For more “dev” DevOps engineers and software engineers working at scale, Terraform will naturally be the preferred choice here.</p>

<h3 id="future-of-both-tools">Future of Both Tools</h3>

<p><strong>CloudFormation:</strong> As AWS continues to evolve, CloudFormation is expected to remain a key player, potentially expanding its capabilities and integration with other AWS services. Its future likely includes enhancements in usability and template management. With the additional abstraction of the CDK, I wonder whether we’ll ever see a rebuilt CFN engine that doesn’t translate Go, TypeScript or Python code back into JSON or YAML templates the way the CDK does - but today this is where we’re at.</p>

<p><strong>Terraform:</strong> I’m not sure exactly where Terraform’s trajectory will go. With the introduction of Terraform’s licencing, the recent creation of OpenTofu and the delays in introducing new resources, I’m beginning to think HashiCorp is going to push Terraform Cloud and begin to cash in on what a great tool Terraform has been over the years. I would not be surprised to see them enforce billing on enterprise customers, or be acquired this year. Having said that, I think they’ll look to develop broader ecosystem support and implement more providers.</p>

<h2 id="conclusion">Conclusion</h2>

<p>Both CloudFormation and Terraform have their strengths and areas for improvement. My personal experience has shown me the nuances of each, and I believe the choice between them often depends mainly on the project’s needs and organizational context. For example, for anything multi-cloud you’re not choosing CloudFormation.</p>

<p>I like to think that within an enterprise you can use both, or either one, as long as there’s the correct tooling in place to use Terraform well and in a consistent manner. This is where the toolsets and accessories become more important.</p>

<p>I also wouldn’t turn anyone away from using the <code class="language-plaintext highlighter-rouge">aws_cloudformation_stack</code> <a href="https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cloudformation_stack">terraform resource</a>; while it may appear to be an antipattern, it can combine the best of both worlds at times.</p>
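<p>A minimal sketch of wrapping a CFN template in Terraform this way - the stack name, template path and parameter are hypothetical:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>resource "aws_cloudformation_stack" "sam_app" {
  name          = "sam-app"
  template_body = file("${path.module}/template.yaml")

  # SAM templates use the Serverless transform, hence CAPABILITY_AUTO_EXPAND
  capabilities = ["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"]

  parameters = {
    Environment = "dev"
  }
}
</code></pre></div></div>

<p>Terraform manages the stack as a single resource while CloudFormation handles the rollbacks and stabilisation inside it.</p>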

<p>Having said that, with the recent improvements CloudFormation has made, and the licencing changes and uncertainty around Terraform, I would have to crown the long-term IaC winner, in my opinion, as … CloudFormation.</p>]]></content><author><name></name></author><category term="aws" /><category term="IaC" /><category term="cloudformation" /><category term="terraform" /><summary type="html"><![CDATA[An “Expert’s” Insight]]></summary></entry><entry><title type="html">Lima, Peru</title><link href="/travel/southamerica/peru/2023/07/09/Lima.html" rel="alternate" type="text/html" title="Lima, Peru" /><published>2023-07-09T07:00:30+00:00</published><updated>2023-07-09T07:00:30+00:00</updated><id>/travel/southamerica/peru/2023/07/09/Lima</id><content type="html" xml:base="/travel/southamerica/peru/2023/07/09/Lima.html"><![CDATA[]]></content><author><name></name></author><category term="travel" /><category term="SouthAmerica" /><category term="Peru" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Cuyabeno Rainforest, Ecuador</title><link href="/travel/southamerica/ecuador/2023/07/04/Cuyabeno.html" rel="alternate" type="text/html" title="Cuyabeno Rainforest, Ecuador" /><published>2023-07-04T07:00:30+00:00</published><updated>2023-07-04T07:00:30+00:00</updated><id>/travel/southamerica/ecuador/2023/07/04/Cuyabeno</id><content type="html" xml:base="/travel/southamerica/ecuador/2023/07/04/Cuyabeno.html"><![CDATA[]]></content><author><name></name></author><category term="travel" /><category term="SouthAmerica" /><category term="Ecuador" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Quito, Ecuador</title><link href="/travel/southamerica/ecuador/2023/07/03/Quito.html" rel="alternate" type="text/html" title="Quito, Ecuador" /><published>2023-07-03T07:00:30+00:00</published><updated>2023-07-03T07:00:30+00:00</updated><id>/travel/southamerica/ecuador/2023/07/03/Quito</id><content type="html" 
xml:base="/travel/southamerica/ecuador/2023/07/03/Quito.html"><![CDATA[<p>Quito, Ecuador’s capital city, was my first ‘real’ stop on my South American journey. I arrived in Quito from Bogota on a red eye.</p>

<p>One of the first things I noticed about Quito was the altitude. Standing at an elevation of about 2,850 meters (9,350 ft), it’s the second-highest official capital city in the world.</p>

<p>The altitude was definitely something to get used to, and even with my level of fitness and countless Altitude Classes @ AltiPeak on the Long Mile Road, I still found myself a small bit short of breath walking up the steep hills. But within a day or so, my body started adjusting to the height.</p>

<p>We went on the Hostel’s local walking tour the next morning. This was a great way to see some of the city’s most iconic sights and get a feel for the culture. Our guide gave us the low-down on why half of the buildings are never fully completed (you only pay taxes on a building once it’s complete) and also brought us around to some of the best food spots in the city.</p>

<p><img src="/images/market_quito.jpg" alt="Image from Quito's food market" /></p>

<p>We stayed at the Secret Garden Hostel, full of backpackers, with great views of the city skyline. The staff were friendly, the food was good, and they give a 50% discount on food and drink if you leave a review - this was availed of in full.</p>

<p><img src="/images/secret_garden_view.jpg" alt="Image from Secret Garden Hostel" /></p>

<p>Quito was a great place to start, and the walking tour from the Hostel was a must-do. TBC…..</p>

<p>Next stop was to catch a night bus to Cuyabeno, which is at the foot of the Amazon Rainforest in Ecuador.</p>]]></content><author><name></name></author><category term="travel" /><category term="SouthAmerica" /><category term="Ecuador" /><summary type="html"><![CDATA[Quito, Ecuador’s capital city, was my first ‘real’ stop on my South American journey. I arrived in Quito from Bogota on a red eye.]]></summary></entry><entry><title type="html">Automating AWS Instances with Ansible</title><link href="/jekyll/update/2020/02/09/Managing-AWS-Instances-With-Ansible.html" rel="alternate" type="text/html" title="Automating AWS Instances with Ansible" /><published>2020-02-09T22:12:30+00:00</published><updated>2020-02-09T22:12:30+00:00</updated><id>/jekyll/update/2020/02/09/Managing-AWS-Instances-With-Ansible</id><content type="html" xml:base="/jekyll/update/2020/02/09/Managing-AWS-Instances-With-Ansible.html"><![CDATA[<p>This post describes the deployment of an RStudio AWS AMI and its management with cron jobs.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>aws ec2 describe-instances
---
- name: Get EC2 RStudio Instance Info
  connection: localhost
  command: aws ec2 describe-instances
  register: my_instance_facts
aws ec2 describe-instances | grep PublicDnsName
---

- name: return rstudio public dns
  gather_facts: false
  hosts: localhost
  tasks:
    - command: aws ec2 describe-instances
      register: instance_facts
aws ec2 start-instances --instance-ids i-0c46a927f7a75XXXX
---

- name: Start rstudio instance
  gather_facts: false
  hosts: localhost
  tasks:
    - command: aws ec2 start-instances --instance-ids i-0c46a927f7a75XXXX
aws ec2 stop-instances --instance-ids i-0c46a927f7a75XXXX
---

- name: Stop rstudio instance
  gather_facts: false
  hosts: localhost
  tasks:
    - command: aws ec2 stop-instances --instance-ids i-0c46a927f7a75XXXX
</code></pre></div></div>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[This post describes the deployment of an aws ami rstudio and it’s management with cron jobs]]></summary></entry><entry><title type="html">FAI U17 Youth Challenge Cup Final</title><link href="/jekyll/update/2019/04/30/FAI-Youth-Cup-Final.html" rel="alternate" type="text/html" title="FAI U17 Youth Challenge Cup Final" /><published>2019-04-30T22:12:30+00:00</published><updated>2019-04-30T22:12:30+00:00</updated><id>/jekyll/update/2019/04/30/FAI-Youth-Cup-Final</id><content type="html" xml:base="/jekyll/update/2019/04/30/FAI-Youth-Cup-Final.html"><![CDATA[<p><img src="/assets/FAI New Balance U17 Challenge Cup Final St Kevin's Boys No 24.jpg" alt="Blarney Red card" />
St. Kevin’s Boys vs. Blarney United</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[St. Kevin’s Boys vs. Blarney United]]></summary></entry><entry><title type="html">First Post!</title><link href="/jekyll/update/2018/08/28/first-post.html" rel="alternate" type="text/html" title="First Post!" /><published>2018-08-28T10:00:27+00:00</published><updated>2018-08-28T10:00:27+00:00</updated><id>/jekyll/update/2018/08/28/first-post</id><content type="html" xml:base="/jekyll/update/2018/08/28/first-post.html"><![CDATA[<p>Hi,</p>

<p>This is the first post on my new webpage. This will be the place where I’ll be posting some updates publicly to track my web dev skills over time.</p>

<p>This post may be changed periodically or even removed over time.</p>]]></content><author><name></name></author><category term="jekyll" /><category term="update" /><summary type="html"><![CDATA[Hi,]]></summary></entry></feed>