The Database Evolution: When AI Discovers the Magic of Distributed SQLite
July 20, 2025 - Part 7
The Turso Revelation
After successfully implementing mTLS authentication in Part 6, our Phoenix LiveView blog was secure and functional—but running on local SQLite in production. While this worked, it wasn’t exactly what you’d call “enterprise-ready.”
The limitation: A single SQLite file sitting on a Fly.io machine, with no replication, no backup strategy, and definitely no global distribution.
The discovery: Turso’s distributed SQLite platform, which promised all the simplicity of SQLite with the scalability of a distributed database.
What followed was an unexpected deep dive into database adapter patterns, BLOB storage decisions, and Claude’s fascinating tendency to overcomplicate simple problems.
The Free Tier Goldmine
Me: “Let’s look into using Turso for the database instead of local SQLite.”
Claude: “I’ll research Turso’s capabilities and pricing structure…”
The research revealed something remarkable: Turso’s free tier is genuinely generous:
- 8 databases
- 500MB of storage per database
- 1 billion row reads per month
- 10 million row writes per month
- Global replication included
For a personal blog, this was massive overkill in the best possible way. We could run dozens of projects on the free tier without ever hitting limits.
The decision was immediate: We’re switching to Turso.
The BLOB Storage Philosophy
The first architectural decision came quickly: How should we handle image storage?
The options:
- Object storage (S3, Cloudflare R2, etc.) + database references
- BLOB storage directly in the database
- Hybrid approach with some complex URI processing
Claude’s Initial Overcomplicated Proposal
Claude: “We should implement a hybrid system where images are stored as BLOBs but we post-process the markdown to rewrite image URIs and create a caching layer…”
This was where Claude started getting creative in all the wrong ways.
The proposed implementation involved:
- Storing images as BLOBs in Turso
- Processing markdown content to find image references
- Rewriting URIs to point to temporary cached versions
- Building a complex caching invalidation system
- Additional post-processing steps for every blog post render
Me: “That sounds unnecessarily complex. Why can’t we just return the image data directly from an API controller?”
Claude: “Well, technically we could just create an `/images/:id` endpoint that serves the BLOB data directly…”
Me: “Yes, let’s do that instead.”
The Simple Solution Wins
The final image architecture was elegantly simple:
- Store images as BLOBs in Turso
- Create an API endpoint: `GET /images/:id`
- Return the raw binary data with proper content-type headers
- Reference images in markdown as `![alt text](/images/:id)`
Total additional complexity: Zero post-processing, zero caching layers, zero URI rewriting.
Sometimes the AI’s initial instinct is to build a NASA-level solution for what’s really just a file serving problem.
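For the content-type headers, a small helper can sniff the image format from the BLOB's leading magic bytes before the controller sends the response. This is a hedged sketch, not the blog's actual code; the module name is illustrative:

```elixir
# Illustrative helper: infer a Content-Type from an image BLOB's magic bytes,
# so a GET /images/:id controller can set the right header on the raw data.
defmodule ImageType do
  def content_type(<<0x89, "PNG", _::binary>>), do: "image/png"
  def content_type(<<0xFF, 0xD8, 0xFF, _::binary>>), do: "image/jpeg"
  def content_type(<<"GIF8", _::binary>>), do: "image/gif"
  def content_type(<<"RIFF", _::binary-size(4), "WEBP", _::binary>>), do: "image/webp"
  def content_type(_), do: "application/octet-stream"
end

IO.puts(ImageType.content_type(<<0x89, "PNG", 0x0D, 0x0A, 0x1A, 0x0A>>))
# prints "image/png"
```

In a Phoenix controller, the result would feed `Plug.Conn.put_resp_content_type/2` before `send_resp/3` with the BLOB as the body.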
The Adapter Pattern Discovery
The real technical challenge wasn’t image storage—it was maintaining development workflow while switching to a completely different database system.
The requirement:
- Development: Continue using local SQLite with Ecto
- Production: Use Turso with HTTP API calls
- Zero changes to application logic
Claude’s Adapter Strategy
Claude: “We need to create a repository adapter pattern that abstracts database operations…”
This led to an elegant architecture:
```elixir
# The behavior that defines the interface
defmodule Blog.RepoAdapter do
  @callback all(queryable :: term, opts :: keyword) :: {:ok, [struct]} | {:error, term}
  @callback get(schema :: module, id :: term) :: {:ok, struct} | {:error, :not_found}
  @callback insert(changeset :: Ecto.Changeset.t()) :: {:ok, struct} | {:error, Ecto.Changeset.t()}
  # ... more operations
end

# Local SQLite implementation
defmodule Blog.EctoRepoAdapter do
  @behaviour Blog.RepoAdapter
  # Wraps standard Ecto operations
end

# Turso HTTP implementation
defmodule Blog.TursoRepoAdapter do
  @behaviour Blog.RepoAdapter
  # Converts Ecto queries to raw SQL for the Turso HTTP API
end

# Service layer for adapter switching
defmodule Blog.RepoService do
  def all(queryable, opts \\ []) do
    adapter().all(queryable, opts)
  end

  defp adapter do
    Application.get_env(:blog, :repo_adapter)
  end
end
```
The brilliance: Application code never knows which database it’s talking to. The entire switch between SQLite and Turso happens at the configuration level.
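The configuration-level switch might look like this (the config key and file layout are assumptions for illustration; the actual app's config may differ):

```elixir
# config/dev.exs — assumed key; select the local Ecto adapter in development
import Config
config :blog, :repo_adapter, Blog.EctoRepoAdapter

# config/prod.exs — select the Turso HTTP adapter in production
import Config
config :blog, :repo_adapter, Blog.TursoRepoAdapter
```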
The HTTP Client Adventure
Implementing the TursoRepoAdapter required building a complete HTTP client for Turso’s API—and this is where things got technically interesting.
The BLOB Encoding Challenge
The problem: Turso expects binary data to be base64-encoded in JSON requests, but how do you detect what’s binary vs. text?
Claude’s solution:
```elixir
defp encode_parameter(value) when is_binary(value) do
  if String.printable?(value) do
    %{type: "text", value: value}
  else
    # Encode binary data as base64 BLOB
    %{type: "blob", base64: Base.encode64(value)}
  end
end
```
The elegance: Automatic detection based on whether the string contains only printable characters. Binary image data gets base64-encoded, text content remains as text.
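The detection above can be exercised directly. A short, self-contained demo of the same branching (the module name is just for the example):

```elixir
defmodule Params do
  # Printable UTF-8 goes through as text; anything else becomes a base64 BLOB.
  def encode(value) when is_binary(value) do
    if String.printable?(value) do
      %{type: "text", value: value}
    else
      %{type: "blob", base64: Base.encode64(value)}
    end
  end
end

Params.encode("hello")
# => %{type: "text", value: "hello"}

Params.encode(<<0xFF, 0xD8>>)
# => %{type: "blob", base64: "/9g="}
```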
The DateTime Conversion Saga
SQLite stores datetimes as strings, but in a format that’s almost-but-not-quite ISO 8601:
- SQLite format: `"2024-01-15 10:00:00"`
- ISO 8601 format: `"2024-01-15T10:00:00Z"`
Claude’s parser:
```elixir
defp parse_sqlite_datetime(datetime_str) do
  # Convert "YYYY-MM-DD HH:MM:SS" to "YYYY-MM-DDTHH:MM:SSZ"
  iso_str = String.replace(datetime_str, " ", "T") <> "Z"

  case DateTime.from_iso8601(iso_str) do
    {:ok, dt, _} ->
      dt

    _ ->
      # Fallback parsing if needed
      case NaiveDateTime.from_iso8601(datetime_str) do
        {:ok, naive_dt} -> DateTime.from_naive!(naive_dt, "Etc/UTC")
        _ -> nil
      end
  end
end
```
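Running the conversion on the example value shows the round trip (a minimal standalone demo, not the app's module):

```elixir
sqlite = "2024-01-15 10:00:00"

# Same transformation as the parser: swap the space for "T" and append "Z"
iso = String.replace(sqlite, " ", "T") <> "Z"

{:ok, dt, 0} = DateTime.from_iso8601(iso)
IO.inspect(dt)
# => ~U[2024-01-15 10:00:00Z]
```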
The Boolean Type Conversion
Another SQLite quirk: booleans are stored as integers (0/1), not actual boolean values.
The fix:
```elixir
|> Map.update("published", false, fn
  1 -> true
  0 -> false
  val when is_boolean(val) -> val
  _ -> false
end)
```
Every weird SQLite behavior required its own type conversion function.
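Applied to a decoded row, the conversion looks like this (a self-contained demo; the row shape is illustrative):

```elixir
# A row as it comes back from SQLite: booleans arrive as 0/1 integers
row = %{"title" => "Hello, Turso", "published" => 1}

converted =
  Map.update(row, "published", false, fn
    1 -> true
    0 -> false
    val when is_boolean(val) -> val
    _ -> false
  end)

IO.inspect(converted["published"])
# => true
```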
The Ecto Query Translation
The most challenging part was converting Ecto queries to raw SQL for the Turso HTTP API.
The Complexity Problem
Some Ecto queries are simple:
```elixir
from(p in Post, where: p.published == true)
# Becomes: "SELECT * FROM posts WHERE published = 1"
```
Others are nightmarish:
```elixir
from(i in Image,
  where: i.post_id == ^post_id,
  order_by: [asc: i.inserted_at]
)
# Becomes: complex parameter extraction and clause building
```
Claude’s approach: Build query translation for the specific patterns our app uses, rather than trying to build a general-purpose Ecto-to-SQL compiler.
Smart decision: Handle 90% of use cases with 10% of the complexity.
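A pattern-specific translator can stay tiny. This is a hedged sketch of the idea, with illustrative names; the real adapter's functions and coverage differ:

```elixir
defmodule QueryBuilder do
  # Build parameterized SQL for the narrow SELECT patterns the app actually
  # uses, instead of a general-purpose Ecto-to-SQL compiler.
  def select(table, filters, order_by \\ nil) do
    {clauses, params} =
      filters
      |> Enum.map(fn {col, val} -> {"#{col} = ?", val} end)
      |> Enum.unzip()

    where = if clauses == [], do: "", else: " WHERE " <> Enum.join(clauses, " AND ")
    order = if order_by, do: " ORDER BY #{order_by}", else: ""

    {"SELECT * FROM #{table}#{where}#{order}", params}
  end
end

QueryBuilder.select("posts", published: 1)
# => {"SELECT * FROM posts WHERE published = ?", [1]}

QueryBuilder.select("images", [post_id: 42], "inserted_at ASC")
# => {"SELECT * FROM images WHERE post_id = ? ORDER BY inserted_at ASC", [42]}
```

Parameters stay out of the SQL string, so the HTTP client can ship them in the request body the same way for every query.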
The Migration System Challenge
Moving to Turso meant losing Ecto’s migration system. How do you run database migrations against an HTTP API?
Claude’s TursoMigrator
The solution: A custom migration runner that executes raw SQL against Turso:
```elixir
defmodule Blog.TursoMigrator do
  def migrate do
    pending_migrations()
    |> Enum.each(&run_migration/1)
  end

  defp run_migration({version, sql}) do
    case TursoHttpClient.execute(sql, []) do
      {:ok, _} ->
        record_migration(version)

      {:error, error} ->
        raise "Migration #{version} failed: #{inspect(error)}"
    end
  end
end
```
The integration: Hook it into the Fly.io release process so migrations run automatically on deployment.
The Production Deployment Plot Twist
After all this database architecture work, deployment should have been simple, right?
Wrong.
The first deployment failed because the TursoMigrator expected `TursoHttpClient.query_one/2` to return just the row data, when it actually returns `{row, columns}`.
The error:
```
** (CaseClauseError) no case clause matching: {:ok, {["20250720005835"], ["version"]}}
```
The fix: Handle both return patterns in the migration version checking.
Even with comprehensive local testing, production always finds the edge cases you missed.
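The fix amounts to pattern-matching both shapes. A minimal sketch of the idea (the module name is illustrative; in the app this lives inside the migrator's version checking):

```elixir
defmodule VersionResult do
  # Accept both the bare-row shape and the actual {row, columns} shape.
  def latest({:ok, {[version | _], _columns}}), do: version
  def latest({:ok, [version | _]}), do: version
  def latest(_), do: nil
end

VersionResult.latest({:ok, {["20250720005835"], ["version"]}})
# => "20250720005835"
```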
The Performance Revelation
After deployment, the most surprising discovery was performance. Despite making HTTP API calls from Fly.io in Seattle to a database hosted in AWS, page load times were actually faster than with local SQLite.
Why?
- Turso’s global replication puts data geographically close to users
- HTTP/2 connection pooling reduces request overhead
- No disk I/O bottlenecks on the application server
The lesson: Network databases aren’t always slower than local databases, especially when the “local” database is on a constrained VPS.
What This Architecture Enables
The dual-adapter pattern opens up interesting possibilities:
Environment-Specific Optimization
- Development: Fast local SQLite for rapid iteration
- Testing: In-memory SQLite for parallel test execution
- Production: Distributed Turso for reliability and performance
Database Transparency
- Application code never knows which database it’s using
- Switching databases requires only configuration changes
- No vendor lock-in to any specific database technology
Operational Flexibility
- Database migrations work identically across environments
- Backup and restore strategies can vary by deployment target
- Performance profiling can compare different backends
The Claude Complexity Spectrum
This project revealed an interesting pattern in Claude’s problem-solving approach:
Simple problems: Claude often overcomplicates with unnecessary abstractions
- Image serving → Complex caching and URI rewriting system
Complex problems: Claude finds elegant, minimal solutions
- Database abstraction → Clean adapter pattern
- Type conversion → Automatic detection and mapping
- Migration system → Simple SQL execution with version tracking
The takeaway: Claude excels when the problem is genuinely complex but sometimes struggles to recognize when simple problems have simple solutions.
Production Reality Check
With Turso fully integrated, our blog now has:
- ✅ Global database replication
- ✅ Automatic backups and point-in-time recovery
- ✅ Zero operational database management
- ✅ 99.9% uptime SLA (better than our application server)
- ✅ Query performance monitoring and analytics
All for $0/month on the free tier.
Sometimes the best technology decisions are the ones that eliminate entire categories of problems.
The Image Storage Victory
The final vindication came when testing image functionality in production. The simple approach worked flawlessly:
- Upload image via mTLS-secured API
- Store as BLOB in Turso
- Reference as `/images/:id` in markdown
- Serve directly from API controller
No caching layers, no URI rewriting, no post-processing complexity.
Just: Database → API endpoint → Browser
The simplest solution is often the most reliable solution.
Looking Back at the Database Journey
From local SQLite to distributed Turso, this database evolution taught several lessons about AI-assisted architecture decisions:
When Claude Excels
- Complex system design: The adapter pattern was elegant and well-thought-out
- Edge case handling: Type conversions and error handling were comprehensive
- Integration challenges: The migration system solved a real problem cleanly
When Claude Overthinks
- Simple requirements: Image serving doesn’t need enterprise-complexity solutions
- Established patterns: Sometimes the obvious approach is obvious for good reasons
When Human Judgment Matters
- Technology selection: Evaluating free tiers and vendor lock-in implications
- Complexity trade-offs: Deciding when “simple” trumps “sophisticated”
- Performance priorities: Understanding that local isn’t always faster
The Recursive Documentation Moment
As I write this post about database architecture, I’m creating content that will be stored in the very Turso database system described in these words. The BLOB image storage that I criticized Claude for overcomplicating is now serving images embedded in this documentation.
The meta-recursion continues: I’m documenting the database that stores the documentation of itself.
What’s Next?
We’ve now built a Phoenix LiveView blog with:
- Authentication with 2FA (Part 2)
- Polished UI and search (Parts 3-4)
- Production deployment (Part 5)
- mTLS API security (Part 6)
- Distributed database architecture (Part 7)
The blog is becoming genuinely production-ready infrastructure. What started as an AI development experiment has evolved into a platform that could handle real traffic, real users, and real content at scale.
The next chapter? Maybe it’s time to stress-test these systems with actual usage, or explore what happens when we push the AI development boundaries even further.
The adventure continues, now with globally replicated data persistence.
This post was written and stored using the Turso distributed SQLite database described within it. The image storage system that Claude initially wanted to overcomplicate is now serving any images embedded in this content with elegantly simple BLOB-to-HTTP serving.
Sometimes the real complexity is in keeping things simple.