<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Kevin Peterson</title>
        <link>https://kevinpeterson.me</link>
        <description>Kevin Peterson's blog</description>
        <lastBuildDate>Thu, 12 Feb 2026 01:12:43 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <image>
            <title>Kevin Peterson</title>
            <url>https://kevinpeterson.me/favicon.ico</url>
            <link>https://kevinpeterson.me</link>
        </image>
        <copyright>All rights reserved Kevin Peterson 2026</copyright>
        <item>
            <title><![CDATA[A Practical Look at the Pipeline Pattern in Python]]></title>
            <link>https://kevinpeterson.me/blogs/a-practical-look-at-the-pipeline-pattern-in-python</link>
            <guid>https://kevinpeterson.me/blogs/a-practical-look-at-the-pipeline-pattern-in-python</guid>
            <pubDate>Tue, 09 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[How the pipeline pattern helps structure Python code by chaining small, focused processing steps.]]></description>
            <content:encoded><![CDATA[
As Python code grows, functions tend to accumulate responsibilities.

One function fetches data.  
Another cleans it.  
Another transforms it.  
Soon, everything is tangled together.

The **pipeline pattern** solves this by breaking work into **small, composable steps** that pass data forward in a predictable way.

---

## What the Pipeline Pattern Is

At its core, the pipeline pattern is simple:

- Each step does one thing
- Each step receives input and returns output
- Steps are chained together
- No step knows about the entire process

Data flows forward. Logic stays isolated.

---

## Why Pipelines Work Well in Python

Python makes pipelines easy because:
- Functions are first-class
- Iterables are flexible
- Generators are lightweight

Pipelines help when:
- You process data in stages
- Each stage is easy to test
- You want to change steps without rewriting everything
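
The generator point deserves a quick sketch: the same chaining idea works lazily, streaming items through the pipeline one at a time instead of building intermediate lists (stage names here are illustrative):

```python
def strip_blank(lines):
    # Stage 1: trim whitespace and drop empty lines
    for line in lines:
        line = line.strip()
        if line:
            yield line

def lowercase(lines):
    # Stage 2: normalize case
    for line in lines:
        yield line.lower()

def build_pipeline(source, stages):
    # Wire stages together; nothing runs until the result is consumed
    for stage in stages:
        source = stage(source)
    return source

lines = ["  Hello  ", "", "WORLD"]
result = list(build_pipeline(lines, [strip_blank, lowercase]))
# result == ["hello", "world"]
```

Because each stage is a generator, large inputs never need to fit in memory all at once.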

---

## A Simple Pipeline Example

Imagine processing user input before saving it.

Each step handles one concern.

```python
def normalize(text):
    return text.strip().lower()

def remove_punctuation(text):
    return ''.join(c for c in text if c.isalnum() or c.isspace())

def tokenize(text):
    return text.split()
```

Now we can chain them:

```python
def run_pipeline(value, steps):
    for step in steps:
        value = step(value)
    return value

steps = [normalize, remove_punctuation, tokenize]
result = run_pipeline("  Hello, World!  ", steps)  # ['hello', 'world']
```

Each function stays small and focused.

---

## Pipelines Improve Testability

Because each step is independent:
- You can test steps in isolation
- You don’t need complex setup
- Failures are easier to locate

Example:

```python
def test_normalize():
    assert normalize(" Hi ") == "hi"
```

Small tests scale better than one giant test for everything.

---

## When to Use the Pipeline Pattern

The pipeline pattern works best when:
- Processing happens in stages
- Steps are reusable
- Order matters

Common use cases:
- Data processing
- ETL jobs
- Input validation
- Text processing

---

## When Pipelines Are a Bad Fit

Pipelines are not ideal when:
- Steps need shared mutable state
- Logic branches heavily
- Performance requires tight loops without abstraction

Not every problem needs a pipeline.

---

## Final Thoughts

The pipeline pattern isn’t about being clever.

It’s about:
- Clear data flow
- Small, readable functions
- Code that’s easy to change

If a function starts doing too much, it might be time to turn it into a pipeline.
]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
        <item>
            <title><![CDATA[Building a Resilient Background Worker in Elixir (Without Overthinking It)]]></title>
            <link>https://kevinpeterson.me/blogs/building-a-resilient-background-worker-in-elixir</link>
            <guid>https://kevinpeterson.me/blogs/building-a-resilient-background-worker-in-elixir</guid>
            <pubDate>Wed, 11 Dec 2024 00:00:00 GMT</pubDate>
            <description><![CDATA[A practical walkthrough of GenServer, supervision, and failure handling with real-world Elixir code.]]></description>
            <content:encoded><![CDATA[
Background jobs sound simple — until they aren’t.

Retries pile up.  
One failure takes down the whole worker pool.  
Jobs disappear silently or run twice.

What I like about Elixir is that it encourages you to **design for failure early**, instead of patching it later.

This post walks through a **simple but production-friendly background worker** using GenServer and supervision — no frameworks, no magic.

![BEAM VM Architecture](/images/blogs/EVM-virtual-machine.png)

---

## The Mental Model: Small Processes, Clear Responsibility

In Elixir, the goal isn’t to create one “smart” worker.  
It’s to create **many small, replaceable processes**.

Each process should:
- Do one thing
- Fail loudly if it can’t
- Be restarted automatically

This is the foundation of fault tolerance on the BEAM.

![Elixir Supervision Tree](/images/blogs/elixir-supervision-tree.png)

---

## Step 1: Define the Worker (GenServer)

Let’s start with a worker that processes a single job and exits.

```elixir
defmodule MyApp.Worker do
  use GenServer
  require Logger

  ## Public API

  def start_link(job) do
    GenServer.start_link(__MODULE__, job)
  end

  ## Callbacks

  @impl true
  def init(job) do
    send(self(), :process)
    {:ok, job}
  end

  @impl true
  def handle_info(:process, job) do
    case perform(job) do
      :ok ->
        Logger.info("Job completed successfully")
        {:stop, :normal, job}

      {:error, reason} ->
        Logger.error("Job failed: #{inspect(reason)}")
        {:stop, reason, job}
    end
  end

  defp perform(_job) do
    if :rand.uniform() > 0.7 do
      :ok
    else
      {:error, :random_failure}
    end
  end
end

```

### Key Ideas

- The worker does its job and exits
- Success and failure are explicit
- No retry logic inside the worker

---

## Step 2: Supervise the Worker

Now we introduce a supervisor to manage worker lifecycles.

```elixir
defmodule MyApp.WorkerSupervisor do
  use DynamicSupervisor

  def start_link(_) do
    DynamicSupervisor.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  @impl true
  def init(:ok) do
    DynamicSupervisor.init(strategy: :one_for_one)
  end

  def start_job(job) do
    spec = {MyApp.Worker, job}
    DynamicSupervisor.start_child(__MODULE__, spec)
  end
end
```

### Why This Works Well

- Each job runs in isolation
- One crash doesn’t affect others
- Restart behavior is consistent and observable

---

## What Happens When a Job Fails?

When `perform/1` fails:

- The GenServer crashes
- The supervisor handles cleanup
- Logs clearly show what happened

No silent retries.  
No hidden state.  
No cascading failures.

This is one of the biggest advantages of building background work directly on the BEAM.

---

## When I Use This Pattern

This approach works well for:

- API-triggered async work
- Data enrichment
- External service calls
- Internal tooling and pipelines

I wouldn’t use it for:

- Massive job queues
- Exactly-once delivery
- Persistent retries without storage

Elixir gives you primitives — not opinions.

---

## Final Thoughts

Elixir didn’t just give me better concurrency.  
It changed how I think about **failure as a design input**.

Small processes.  
Clear supervision.  
Predictable recovery.

When production gets boring, you’re doing it right.

]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
        <item>
            <title><![CDATA[Deploying JavaScript Applications]]></title>
            <link>https://kevinpeterson.me/blogs/deploying-javascript-applications</link>
            <guid>https://kevinpeterson.me/blogs/deploying-javascript-applications</guid>
            <pubDate>Wed, 04 May 2016 00:00:00 GMT</pubDate>
            <description><![CDATA[Practical guidance on organizing, building, and deploying JavaScript applications without unnecessary complexity.]]></description>
            <content:encoded><![CDATA[
Deploying JavaScript applications used to be simple, and in many ways, it still should be.

Most deployment problems don’t come from JavaScript itself—they come from poor structure, unclear responsibilities, and treating deployment as an afterthought.

This post outlines a few principles that help keep JavaScript deployments predictable and manageable.

---

## Treat JavaScript as a Build Artifact

JavaScript source files are not what you deploy.

What you deploy is the **output**:
- Combined files
- Minified code
- Versioned assets

Source code exists for developers. Deployed code exists for browsers.

Keeping that distinction clear makes debugging production issues much easier.

---

## Organize Code Around Responsibility

Files should be grouped by what they do, not how they were written.

Good JavaScript structure makes it obvious:
- What runs on page load
- What enhances user interaction
- What can safely fail

Example structure:

```
js/
  app.js
  dom.js
  events.js
  ajax.js
```

If one file fails, the rest of the application should still function.

---

## Avoid Global State

Global variables make deployment fragile.

When multiple scripts depend on shared globals:
- Load order matters
- Partial failures cascade
- Debugging becomes painful

Encapsulate functionality and expose only what’s necessary.
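
One classic way to do that (a sketch, not the only option) is the module pattern: an immediately invoked function keeps state private and returns a small public API.

```js
// Module pattern: `count` is private; only the returned API is visible.
var counter = (function () {
  var count = 0; // invisible to other scripts, so no load-order coupling

  return {
    increment: function () { count += 1; return count; },
    value: function () { return count; }
  };
})();

counter.increment();
counter.increment();
// counter.value() === 2, and counter.count is undefined
```

Nothing leaks into the global scope except `counter` itself, and even that can be namespaced.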

---

## Minify and Combine at Deploy Time

Write JavaScript for humans first.

Minification, concatenation, and compression should happen **during deployment**, not during development.

This keeps source readable while ensuring deployed code is fast and efficient.

---

## Version Your Assets

Browsers cache aggressively. That’s a good thing—until you deploy new code.

Deployed JavaScript should always be versioned or fingerprinted so clients never receive a mix of old and new files.

Example:

```
app.4f3a9c.js
```

This avoids an entire class of hard-to-reproduce bugs.

---

## Test What You Deploy

Don’t assume deployed code behaves the same as source code.

Always test:
- Minified output
- Combined files
- Cached assets

Many bugs only appear after the build step.

---

## Keep Deployment Boring

A good JavaScript deployment process is:
- Repeatable
- Predictable
- Uninteresting

If deploying feels stressful, the problem is usually structure, not tooling.

---

## Final Thoughts

JavaScript deployment doesn’t need to be complicated.

Clear structure, explicit build steps, and predictable outputs go a long way.

The goal isn’t cleverness—it’s confidence.
]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
        <item>
            <title><![CDATA[Making Generated JavaScript Inline-Caching Friendly in ReScript]]></title>
            <link>https://kevinpeterson.me/blogs/making-generated-javascript-inline-caching-friendly-in-rescript</link>
            <guid>https://kevinpeterson.me/blogs/making-generated-javascript-inline-caching-friendly-in-rescript</guid>
            <pubDate>Sat, 22 Aug 2020 00:00:00 GMT</pubDate>
            <description><![CDATA[How ReScript uses types and predictable data shapes to generate faster, more debuggable JavaScript.]]></description>
            <content:encoded><![CDATA[
In this post, I want to explain how ReScript (formerly BuckleScript) improves JavaScript performance by using **type information to generate predictable runtime representations**.

Modern JavaScript engines like V8 heavily optimize based on object shapes. When those shapes stay consistent, engines can apply **inline caching**, which leads to significant performance gains.

ReScript leans into this by ensuring the JavaScript it generates is both **idiomatic** and **engine-friendly**.

---

## Why Encoding Matters

JavaScript engines optimize property access based on how objects look at runtime.

If object shapes change often:
- Inline caches break
- Property access slows down
- Debugging becomes harder

ReScript avoids this by guaranteeing:
- Similar data always has the same shape
- Variant representations are predictable
- Generated JavaScript remains readable
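
A small plain-JavaScript illustration (hypothetical code, not compiler output) of why this matters: every object below is built by the same function, so the property access in the hot loop sees a single shape and stays monomorphic.

```js
// Every point comes from the same constructor function, so every object
// has the same hidden shape: keys x then y, in that order.
function makePoint(x, y) {
  return { x: x, y: y };
}

function sumX(points) {
  var total = 0;
  for (var i = 0; i < points.length; i++) {
    total += points[i].x; // one shape seen here, so the inline cache holds
  }
  return total;
}

var points = [makePoint(1, 2), makePoint(3, 4), makePoint(5, 6)];
var total = sumX(points); // 9
```

If some call sites instead passed `{ y: ..., x: ... }` or objects with extra keys, this access site would go polymorphic and the engine would fall back to slower lookups. ReScript's encodings guarantee the consistent case.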

---

## Record Encoding (Stable Objects)

Records in ReScript compile directly to JavaScript objects.

Example ReScript record:

```rescript
type int64 = {
  @as("lo") loBits: int,
  @as("hi") hiBits: int,
}
let value = {hiBits: 33, loBits: 32}
let sum = ({loBits, hiBits}) => loBits + hiBits
```

Generated JavaScript looks like:

```js
var value = { lo: 32, hi: 33 }
function sum(param) { return param.lo + param.hi }
```

Why this works well:
- Object shapes are consistent
- Property names are stable
- Debuggers show meaningful structures

---

## Variant Encoding (Tagged Unions)

Variants are encoded using different strategies depending on whether they carry data.

### Variants Without Payload

These compile to simple numbers.

ReScript:

```rescript
type color = | Red | Blue | Green
```

Generated JavaScript:

```js
// Red compiles to 0, Blue to 1, Green to 2
```

This representation is compact and fast.

---

### Variants With Payload

Variants that carry data compile to objects with a stable layout.

ReScript:

```rescript
type tree = | Leaf(int) | Node(tree, int, tree)
let t = Node(Leaf(1), 3, Leaf(2))
```

With two payload-carrying constructors, a `TAG` field distinguishes them. Generated JavaScript (simplified):

```js
var t = { TAG: 1, _0: { TAG: 0, _0: 1 }, _1: 3, _2: { TAG: 0, _0: 2 } }
```

Each object:
- Uses the same fields
- Preserves stable shapes
- Works well with inline caching

---

## Optimized Variant Encoding (Single Payload)

When only one variant carries data, ReScript skips the TAG field entirely.

ReScript:

```rescript
type list = | Nil | Cons(int, list)
let xs = Cons(1, Nil)
```

Generated JavaScript:

```js
var xs = { _0: 1, _1: 0 }
```

This reduces object size while keeping pattern matching predictable.

---

## List Encoding for Debugging

Lists can also be encoded using clearer field names:

```js
{ hd: 1, tl: { hd: 2, tl: { hd: 3, tl: 0 } } }
```

This improves debuggability while remaining efficient for JavaScript engines.

---

## Polymorphic Variants

Polymorphic variants also generate stable shapes.

ReScript:

```rescript
let v = #hello(3)
```

Generated JavaScript:

```js
var v = { HASH: someNumber, VAL: 3 }
```

These shapes remain consistent and engine-optimizable.

---

## Final Thoughts

ReScript’s approach to encoding data is about more than performance. It also improves:
- Debugging experience
- Predictability
- Confidence in refactors

By combining strong typing with JavaScript-friendly output, ReScript produces code that runs fast and stays understandable.

This balance is what makes ReScript especially appealing for long-lived, production frontend systems.
]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
        <item>
            <title><![CDATA[Practical CSS Tips for Hotwire Apps (That Keep Things Boring)]]></title>
            <link>https://kevinpeterson.me/blogs/practical-css-tips-for-hotwire-apps</link>
            <guid>https://kevinpeterson.me/blogs/practical-css-tips-for-hotwire-apps</guid>
            <pubDate>Sun, 02 Apr 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[Simple, production-friendly CSS patterns that work well with Hotwire, Turbo, and server-rendered HTML.]]></description>
            <content:encoded><![CDATA[
One of the underrated benefits of Hotwire is how much frontend complexity simply disappears.

No heavy client-side state.  
No massive JavaScript bundles.  
Just HTML, CSS, and a small amount of JavaScript where it actually helps.

That simplicity changes how you should think about CSS. These are a few practical, production-tested patterns that work especially well with Hotwire and Turbo.

---

## Prefer Layout CSS Over JavaScript

If something can be solved with CSS, it probably should be.

Flexbox and Grid handle most layout needs without touching JavaScript, which makes Turbo-driven page updates faster and more predictable.

Good CSS-first choices include:
- Centering content with flexbox
- Using grid for page structure
- Letting media queries handle responsiveness

Less JavaScript means fewer edge cases when Turbo replaces parts of the DOM.

---

## Embrace Utility Classes (But Keep Them Boring)

Utility classes work extremely well in server-rendered applications.

Simple helpers like:
- `.flex`
- `.gap-sm`
- `.text-muted`
- `.hidden`

keep templates readable without introducing heavy component abstractions.

The key is restraint. A small, well-documented set of utilities beats a large, inconsistent one.

Example utility styles:

```css
.flex { display: flex; }
.gap-sm { gap: 0.5rem; }
.text-muted { color: #6b7280; }
.hidden { display: none; }
```

---

## Make Turbo Transitions Feel Natural

Turbo updates content instantly, which can feel abrupt without subtle visual cues.

CSS transitions help smooth things out:
- Fading content in after a frame update
- Animating height changes for expandable sections
- Adding hover and focus transitions to interactive elements

Example transition pattern:

```css
[data-turbo-frame] {
  transition: opacity 150ms ease-in;
}
```

You don’t need complex animations. Small transitions make the UI feel intentional without fighting Turbo.

---

## Use Data Attributes as Styling Hooks

Hotwire encourages behavior driven by data attributes. CSS can use the same signals.

Instead of toggling classes with JavaScript, you can target attributes directly.

Example:

```css
button[data-loading] {
  opacity: 0.6;
  pointer-events: none;
}
```

This keeps behavior declarative and avoids unnecessary JavaScript.

---

## How This Fits into Ruby on Rails

Hotwire works best when Rails stays in charge of rendering HTML.

Instead of pushing state management to JavaScript, Rails actions respond with:
- Full page renders
- Turbo Frames
- Turbo Streams

CSS becomes the glue that keeps those updates feeling smooth and intentional.

A typical Rails controller action might look like this:

```ruby
respond_to do |format|
  format.turbo_stream
  format.html
end
```

From a CSS perspective, the goal is simple:
- Don’t fight DOM replacement
- Avoid layout jumps
- Let the browser handle visual state

---

## Styling Turbo Streams Without Extra JavaScript

Turbo Streams update the DOM instantly. CSS can handle a surprising amount of polish without Stimulus.

Common Rails patterns that pair well with CSS:
- Flash messages after create or update
- Inline validation errors
- List items appearing after Turbo Stream inserts

Example CSS for stream inserts:

```css
[data-turbo-stream] {
  animation: fade-in 150ms ease-in;
}

@keyframes fade-in {
  from { opacity: 0; }
  to { opacity: 1; }
}
```

Rails sends HTML. CSS handles presentation. No extra JavaScript required.

---

## Rails Partials and Predictable CSS

Rails partials encourage small, reusable view fragments. CSS should mirror that structure.

Good habits:
- Scope styles to partials
- Avoid global selectors
- Use class names that reflect the partial’s responsibility

Example structure:

```
app/views/posts/_post.html.erb
app/assets/stylesheets/posts.css
```

When Turbo swaps a partial, the CSS already knows how that fragment should behave.

---

## Stimulus Should Enhance, Not Replace CSS

Stimulus works best when it handles:
- Focus management
- Keyboard interactions
- Small state toggles

CSS should still handle:
- Layout
- Visibility
- Transitions

Example Stimulus + CSS balance:
- Stimulus toggles a data attribute
- CSS controls how the UI responds

```css
[data-expanded="false"] { display: none; }
[data-expanded="true"] { display: block; }
```

Rails stays simple. Stimulus stays small. CSS does the heavy lifting.
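
The toggle itself can be sketched in plain JavaScript (no Stimulus dependency; `toggleExpanded` and the element stand-in are illustrative):

```js
// Flip the data attribute; the CSS above decides what that means visually.
function toggleExpanded(el) {
  var expanded = el.dataset.expanded === "true";
  el.dataset.expanded = expanded ? "false" : "true";
  return el.dataset.expanded;
}

// In a Stimulus controller this would be `this.element`; a plain object
// stands in for a DOM element here.
var el = { dataset: { expanded: "false" } };
toggleExpanded(el); // el.dataset.expanded is now "true"
```

In a real controller this is a one-line action; the visual behavior lives entirely in the stylesheet.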

---

## Avoid Over-Styling Turbo Frames

Turbo Frames are structural, not visual.

A good rule of thumb:
- Style the content inside the frame
- Avoid styling the frame element itself

This prevents layout shifts when frames update and keeps the UI stable.

---

## Keep CSS Close to the View

Hotwire apps benefit from CSS that is easy to trace.

When styles live close to the views they affect:
- Changes are easier to reason about
- Refactors are safer
- Dead styles are easier to remove

Boring CSS is maintainable CSS.

---

## Final Thoughts

Hotwire encourages a simpler frontend model, and your CSS should reflect that.

Let the browser do the work.  
Use JavaScript sparingly.  
Keep styles predictable.

Rails apps live a long time.  
CSS that works with Hotwire should be easy to read, easy to delete, and hard to break.

When your CSS feels boring and obvious, you’re probably doing it right.]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
        <item>
            <title><![CDATA[Understanding to_a vs to_ary in Ruby (And Why It Matters)]]></title>
            <link>https://kevinpeterson.me/blogs/understanding-to-a-vs-to-ary-in-ruby</link>
            <guid>https://kevinpeterson.me/blogs/understanding-to-a-vs-to-ary-in-ruby</guid>
            <pubDate>Mon, 06 Feb 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[A practical explanation of Ruby’s to_a and to_ary methods, when each is called, and how misuse can lead to subtle bugs.]]></description>
            <content:encoded><![CDATA[
At first glance, `to_a` and `to_ary` look like they do the same thing.

They don’t.

Both convert objects into arrays, but they exist for **very different reasons**, and confusing them can lead to subtle bugs—especially in Rails apps where implicit conversions happen all the time.

This post breaks down what each method is for, when Ruby calls them automatically, and how to use them safely.

---

## What `to_a` Is For

`to_a` is an **explicit conversion**.

It means:
> “If someone asks for an array, here’s how to represent this object as one.”

Ruby will only call `to_a` when **you explicitly ask for it**.

Example:

```ruby
range = (1..3)
range.to_a
```

Result:

```ruby
[1, 2, 3]
```

This is safe and predictable. Nothing magical happens behind the scenes.

In Rails, you’ll see `to_a` used when:
- Converting ActiveRecord relations
- Materializing lazy enumerables
- Making data serializable

---

## What `to_ary` Is For

`to_ary` is an **implicit conversion**.

It tells Ruby:
> “This object *is* an array.”

Because of that, Ruby will call `to_ary` **automatically** in situations where it expects an array.

Example:

```ruby
class Pair
  def to_ary
    [1, 2]
  end
end

pair = Pair.new
a, b = pair
```

Ruby calls `to_ary` without asking you.

This is powerful—but dangerous.

---

## Why `to_ary` Is Dangerous

When you define `to_ary`, Ruby may use it in places you didn’t expect:

- Multiple assignment
- Array destructuring
- Splat (`*`) operations
- Method argument expansion

Example:

```ruby
pairs = [Pair.new]

# With multiple block parameters, Ruby calls to_ary
# on each yielded value behind the scenes:
pairs.each do |a, b|
  puts "#{a}, #{b}"  # a == 1, b == 2
end
```

Because `Pair` implements `to_ary`, Ruby silently destructures it, even though you never asked for an array.

This can:
- Hide bugs
- Break method contracts
- Make debugging harder

That’s why `to_ary` should be used **very rarely**.

---

## A Rails Example Where This Matters

Consider a Rails model that wraps multiple values:

```ruby
class Coordinates
  def initialize(x, y)
    @x = x
    @y = y
  end
end
```

If you add `to_ary` here, Rails helpers, form builders, or serializers may start treating it as a real array.

That’s usually not what you want.

If you want explicit conversion, use `to_a` instead:

```ruby
def to_a
  [@x, @y]
end
```

Now conversion only happens when you opt in.
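
A quick self-contained contrast makes the difference visible (class body repeated here so the snippet runs on its own):

```ruby
class Coordinates
  def initialize(x, y)
    @x = x
    @y = y
  end

  # Explicit conversion only; Ruby will never call this implicitly.
  def to_a
    [@x, @y]
  end
end

point = Coordinates.new(1, 2)

a, b = point        # no to_ary, so no implicit conversion:
                    # a is the Coordinates object, b is nil
x, y = point.to_a   # explicit opt-in: x == 1, y == 2
```

Define `to_ary` instead, and the first assignment would silently destructure.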

---

## When You Should Use `to_ary`

Use `to_ary` only when:
- Your object truly *is* an array conceptually
- Implicit conversion makes the code clearer
- You fully understand where Ruby might call it

Common examples:
- Tuple-like objects
- Value objects meant for destructuring
- Internal DSLs with controlled usage

Even then, caution is warranted.

---

## When You Should Use `to_a`

Use `to_a` almost everywhere else.

It’s the right choice when:
- You want predictable behavior
- You’re working with collections or enumerables
- You don’t want Ruby guessing your intent

Rails itself heavily prefers explicit conversions for this reason.

---

## Final Thoughts

The difference between `to_a` and `to_ary` isn’t about arrays.

It’s about **intent**.

- `to_a` says: “Convert me if asked.”
- `to_ary` says: “Treat me as an array everywhere.”

In most Ruby and Rails codebases, explicit conversion wins.

If you ever feel unsure, don’t implement `to_ary`.  
That’s usually the right choice.
]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
        <item>
            <title><![CDATA[Why Elixir Changed How I Think About Backend Systems]]></title>
            <link>https://kevinpeterson.me/blogs/why-elixir-changed-how-i-think-about-backend-systems</link>
            <guid>https://kevinpeterson.me/blogs/why-elixir-changed-how-i-think-about-backend-systems</guid>
            <pubDate>Sun, 12 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Lessons from building concurrent, fault-tolerant systems with Elixir, the BEAM, and a few other tools along the way.]]></description>
            <content:encoded><![CDATA[
I didn’t start with Elixir because I wanted a new language.  
I started with Elixir because I was tired of fighting production issues that felt *structural*, not accidental.

Race conditions. Fragile background jobs. Systems that looked fine in staging but slowly fell apart under real traffic.

Elixir didn’t magically fix everything — but it **forced me to think differently** about how systems should behave when things go wrong.

---

## Concurrency as a First-Class Concept

Most platforms *support* concurrency.  
Elixir (and the BEAM) are **built around it**.

Instead of threads and shared memory, you get:
- Lightweight processes
- Message passing
- Isolation by default

This changes the question from  
> “How do I protect shared state?”  
to  
> “What state actually needs to exist?”

Once you model work as small, independent processes, systems become easier to reason about — and easier to scale.


---

## Letting Things Fail (On Purpose)

One of the hardest mindset shifts for me was accepting that **failure is normal**.

In many stacks, failure is treated as something to prevent at all costs.  
In Elixir, failure is expected — and *designed for*.

Supervision trees make this practical:
- A process crashes → supervisor restarts it
- A bad request doesn’t poison the system
- Recovery is automatic and predictable

This doesn’t mean you ignore errors.  
It means you **contain them**.

Production systems become calmer when failure paths are intentional instead of accidental.

---

## Phoenix, LiveView, and Real-Time Without Pain

Phoenix feels familiar at first — routing, controllers, templates.  
But where it really shines is real-time work.

With LiveView:
- State lives on the server
- UI updates are pushed efficiently
- You avoid large client-side complexity for many use cases

It’s not a replacement for React in every scenario.  
But for dashboards, internal tools, and real-time workflows, it’s hard to beat.

The key lesson for me wasn’t LiveView itself — it was realizing how much frontend complexity exists *only* because the backend can’t keep up.

---

## Elixir Isn’t the Only Tool (And That’s Okay)

I don’t use Elixir everywhere.

Some problems are better suited for:
- Python (ML, scripting, glue code)
- TypeScript (frontend-heavy products)
- Go (simple, static binaries)

What Elixir taught me wasn’t “always use Elixir.”  
It taught me to **choose tools based on failure modes, concurrency needs, and operational simplicity**.

That mindset carries over no matter what language I’m using.

---

## Final Thoughts

Elixir didn’t make me a better programmer overnight.  
But it made me a more **intentional system designer**.

I now think more about:
- What happens when this crashes?
- Who restarts what?
- How much state do I really need?
- Can this fail without taking everything else down?

Those questions matter more than syntax.

If you’re curious about Elixir but unsure where it fits, my advice is simple:  
build something small, put it under load, and watch how it behaves.

That’s where it clicks.
]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
        <item>
            <title><![CDATA[Why I Like Using ReScript for Frontend Work]]></title>
            <link>https://kevinpeterson.me/blogs/why-i-like-using-rescript-for-frontend-work</link>
            <guid>https://kevinpeterson.me/blogs/why-i-like-using-rescript-for-frontend-work</guid>
            <pubDate>Thu, 18 May 2023 00:00:00 GMT</pubDate>
            <description><![CDATA[A practical look at ReScript, strong typing, and why it feels different from typical JavaScript-heavy workflows.]]></description>
            <content:encoded><![CDATA[
I didn’t start using ReScript because I wanted another language.  
I started using it because I wanted fewer frontend bugs without slowing down development.

JavaScript is flexible, but that flexibility often shows up as runtime issues, unclear contracts, and fragile refactors. ReScript pushes many of those problems earlier — at compile time — while still feeling familiar if you’ve worked with React.

---

## What ReScript Actually Is

ReScript is a strongly typed language that compiles to clean, readable JavaScript.

A few important things to know:

- Strong static typing
- Very fast compiler
- Clear, human-readable error messages
- First-class support for React
- Predictable JavaScript output

It doesn’t try to be clever. It tries to be safe and practical.

---

## Types That Actually Help

One of the biggest wins in ReScript is how types guide your thinking instead of getting in your way.

Instead of defensive checks everywhere, you model your data clearly.

```rescript
type status =
  | Loading
  | Success(string)
  | Error(string)
```
Now your UI must handle every possible case:

```rescript
let render = status =>
  switch status {
  | Loading => "Loading..."
  | Success(data) => data
  | Error(message) => "Error: " ++ message
  }
```

No forgotten states.
No undefined behavior sneaking into production.

---
## Interop with JavaScript

ReScript doesn’t pretend JavaScript doesn’t exist.

You can call JavaScript libraries directly and incrementally adopt ReScript inside an existing React codebase.

Example binding:
```rescript
@module("uuid")
external v4: unit => string = "v4"
```
That’s all you need.  
No wrappers.  
No ceremony.

---

## Why I Reach for ReScript

I like ReScript when I want:

- Confidence during refactors
- Clear and enforced data models
- Fewer runtime surprises
- Strong typing without heavy syntax

It works especially well for:

- Frontend-heavy applications
- Shared UI libraries
- Teams tired of TypeScript edge cases

---

## When I Wouldn’t Use It

ReScript isn’t always the right tool.

I usually avoid it when:

- The team has no interest in typed functional concepts
- The project is very small or short-lived
- The ecosystem depends heavily on TypeScript-only tooling

Like any language, it works best when the problem fits.

---

## Final Thoughts

ReScript doesn’t try to replace JavaScript culture — it improves it.

You still write React.  
You still ship fast.  
You just catch more problems *before* users do.

For me, that tradeoff is worth it.

If you’re curious, try converting a single component or utility first. That’s usually enough to see whether it clicks.
]]></content:encoded>
            <author>kevinpetersoncr0930@gmail.com (Kevin Peterson)</author>
        </item>
    </channel>
</rss>