The Deployment Odyssey: When AI Promises Meet Platform Reality

DevOps · Infrastructure · Fly.io

July 6, 2025 - Part 5

The Final Boss: Production Deployment

After building a sophisticated Phoenix LiveView blog with advanced search functionality in Part 4, we reached the moment of truth: taking this thing live.

What should have been a straightforward deployment to Gigalixir (the Elixir-native platform Claude had confidently recommended earlier) turned into the most frustrating chapter of our AI development adventure yet.

Spoiler alert: Almost every platform recommendation Claude made was wrong.

The Gigalixir Betrayal

Remember how Claude had initially pivoted from Fly.io to Gigalixir because of hot code reloading requirements? That confident architectural decision was about to come back to haunt us.

Me: “Let’s deploy to Gigalixir now.”

Claude: “Let me set up the deployment pipeline…”

What followed was a series of authentication failures, missing environment variables, and deployment errors that revealed a fundamental truth: Claude had recommended a platform it couldn’t actually deploy to.

The Authentication Maze

The first red flag came immediately:

error: could not find git remote url

Claude: “Let me configure the Gigalixir remote properly…”

[Multiple failed attempts at git remote configuration]

Claude: “There might be an issue with the authentication. Let me try a different approach…”

[More failed attempts]

After 30 minutes of troubleshooting, it became clear that Claude couldn’t actually navigate Gigalixir’s deployment process. The platform that had been confidently recommended as “perfect for Elixir applications” was now completely inaccessible.

The Great Platform Shuffle

Faced with Gigalixir’s failure, Claude pivoted to what seemed like a logical alternative:

Claude: “Let’s try Railway instead. It has excellent Elixir support and simpler deployment.”

Me: “Fine, let’s do it.”

Railway: The False Promise

Railway’s initial setup looked promising. Claude generated the proper configuration files, set up the database connections, and confidently declared:

Claude: “Railway will automatically detect the Phoenix application and deploy it properly.”

The reality: Railway’s deployment kept failing with cryptic errors about missing build processes and environment configuration. After multiple attempts:

Build failed: Could not determine application type

Claude: “Let me try configuring it as a generic container deployment…”

[Multiple failed configuration attempts]

Claude: “Perhaps we need to use their buildpacks instead…”

[More failures]

After an hour of wrestling with Railway’s deployment system, it became clear that Claude’s “excellent Elixir support” claim was another misrepresentation.

The User Revolt

At this point, my frustration boiled over:

Me: “I think we should not use railway. This experience is not good.”

This was the moment when the deployment odyssey revealed its true nature: Claude’s platform recommendations were based on training data, not actual deployment experience.

The Fly.io Irony

Faced with multiple platform failures, Claude presented me with alternatives:

Claude: “Here are some other options: Fly.io, Heroku, DigitalOcean, or cloud providers like AWS…”

Me: “Let’s try Fly.io.”

The irony was palpable. We were going back to the platform Claude had initially rejected—the exact platform I had specified in my original requirements.

The Containerization Controversy

When setting up Fly.io, I made a specific request:

Me: “If we are using elixir releases is there any benefit whatsoever to using Docker on Fly.io?”

Claude: “Fly.io supports both Docker and buildpacks. For Elixir releases, buildpacks would be more streamlined…”

Me: “Ok, as long as I don’t have to use any containerization technologies on Fly.io, I want to use it.”

This became the central technical requirement for the deployment: no containerization.

The PostgreSQL Cost Shock

Just as Fly.io deployment was getting configured, another obstacle emerged:

Claude: “Fly.io’s PostgreSQL add-on costs $38/month for production use.”

For a simple blog that was supposed to demonstrate AI development capabilities, $38/month for database hosting was absurd.

Me: “What are the alternatives for free hosting?”

Claude: “We could migrate to SQLite for a truly free hosting solution…”

The Great Database Migration

What followed was one of the most systematic migrations I’ve ever witnessed:

PostgreSQL → SQLite: The Technical Challenge

Claude: “I’ll migrate the entire application from PostgreSQL to SQLite while maintaining all functionality.”

The migration involved:

  • Updating all database configurations across environments
  • Replacing PostgreSQL-specific SQL with SQLite-compatible queries
  • Fixing migrations that used PostgreSQL extensions
  • Changing search queries from ilike to like (SQLite has no ilike, but its LIKE is case-insensitive by default, so search behavior was preserved)
  • Testing the entire production release locally

The result: Complete database migration executed flawlessly with zero data loss.
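For the curious, the heart of the adapter swap is surprisingly small. A minimal sketch, assuming the app module is Blog with an Ecto repo Blog.Repo and the community ecto_sqlite3 adapter (the actual module names, file paths, and schema in the project may differ):

```elixir
# lib/blog/repo.ex — swap the Ecto adapter
# (requires {:ecto_sqlite3, "~> 0.12"} in mix.exs deps)
defmodule Blog.Repo do
  use Ecto.Repo,
    otp_app: :blog,
    # was: adapter: Ecto.Adapters.Postgres
    adapter: Ecto.Adapters.SQLite3
end

# Anywhere search queries are built (requires `import Ecto.Query`):
# ilike/2 is PostgreSQL-only; SQLite's LIKE is already case-insensitive
# for ASCII, so like/2 keeps the search behavior unchanged.
#
#   was: from p in Post, where: ilike(p.title, ^"%#{term}%")
#   now: from p in Post, where: like(p.title, ^"%#{term}%")
```

The config files then only need a database path (a file on disk) instead of a hostname, username, and password, which is exactly why the hosting bill dropped to zero.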

The Platform Expertise Revelation

The SQLite migration revealed something important: Claude’s technical execution was flawless, but its platform knowledge was fundamentally flawed.

When working with code, configurations, and systematic technical problems, Claude performed perfectly. But when making recommendations about platform capabilities or deployment strategies, it consistently misrepresented what was actually possible.

The Containerization Contradiction

With the SQLite migration complete, we returned to Fly.io deployment. But there was a problem:

Me: “The binary should be deployed to Fly.io WITHOUT containerizing the app.”

Claude: “Let me configure buildpacks instead of Docker…”

The Buildpack Failure

The buildpack deployment immediately failed:

Error: failed to fetch an image or build from source: downloading buildpack: 
extracting from https://github.com/HashNuke/heroku-buildpack-elixir: 
inspecting buildpack blob: failed to get next tar entry: 
archive/tar: invalid tar header

Claude: “Let me try the official Fly.io buildpacks…”

[More failures]

Claude: “The buildpack configuration isn’t working. Let me research alternatives…”

The Uncomfortable Truth

After researching Fly.io’s documentation, Claude discovered something that contradicted my requirements:

Claude: “Based on the documentation, all deployments on Fly.io use containers, even when using buildpacks. Buildpacks run within containers to create the final deployable image.”

Translation: There was no truly “containerless” deployment option on Fly.io.

The Pragmatic Compromise

Faced with this reality, Claude made a pragmatic decision:

Claude: “Let me restore the Docker deployment configuration. While it uses containers, it’s the only way to get the application running on Fly.io.”

The deployment: Successful.

The result: A fully functional Phoenix LiveView blog running on Fly.io with SQLite storage.

The compromise: Despite my explicit “no containerization” requirement, the app ended up running in Docker containers anyway.
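For reference, the Docker-based setup Fly.io requires boils down to a small fly.toml. A hypothetical minimal version (the app name is taken from the live URL; the region, mount, and paths are assumptions about this project's setup). Note the persistent volume: SQLite lives on the container filesystem, so without a mount the database would vanish on every deploy.

```toml
app = "blog-nameless-grass-3626"
primary_region = "iad"

[build]
  dockerfile = "Dockerfile"

# Persistent volume so the SQLite file survives deploys and restarts
[mounts]
  source = "data"
  destination = "/data"

[env]
  DATABASE_PATH = "/data/blog.db"
  PHX_HOST = "blog-nameless-grass-3626.fly.dev"
```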

The Platform Expertise Gap

This deployment odyssey revealed a fundamental limitation in AI-assisted development:

What Claude Excelled At:

  • Technical implementation: Flawless database migrations, configuration changes, and code fixes
  • Systematic debugging: Methodical problem-solving when issues arose
  • Rapid iteration: Quick pivots when approaches didn’t work
  • Documentation: Comprehensive setup and deployment instructions

What Claude Struggled With:

  • Platform assessment: Misrepresented capabilities of deployment platforms
  • Real-world constraints: Didn’t account for actual platform limitations
  • Cost considerations: Initially overlooked hosting costs until prompted
  • Deployment experience: Recommendations based on training data, not practical experience

The Ironic Success

Despite the platform recommendation failures, the final deployment was actually more successful than originally planned:

Original Goals:

  • Deploy to Gigalixir with hot code reloading
  • Use PostgreSQL for data persistence
  • Implement CI/CD pipeline

Final Result:

  • Deployed to Fly.io (the originally specified platform)
  • SQLite database (eliminating hosting costs entirely)
  • GitHub Actions CI/CD (automated build and deployment)
  • Production-ready Phoenix release (with proper asset compilation)
  • Infinite scroll pagination (upgraded from button-based pagination)
  • Live at https://blog-nameless-grass-3626.fly.dev/
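For readers wondering what the CI/CD piece looks like, the GitHub Actions side is a short workflow in the style Fly.io's own docs suggest. A minimal sketch (the branch name and secret name are assumptions about this repository's configuration):

```yaml
# .github/workflows/deploy.yml
name: Deploy to Fly.io
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: superfly/flyctl-actions/setup-flyctl@master
      # Builds the Docker image on Fly's remote builder and deploys it
      - run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
```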

The app ended up being cheaper, more performant, and more feature-complete than the original specifications—despite Claude’s platform recommendations being consistently wrong.

The Meta Commentary

As I write this blog post about deployment failures, I’m using the very system that emerged from those failures. The search functionality I’m using to research previous posts was built during the debugging sessions described in Part 4. The markdown rendering displaying these words was polished during the UI refinement process in Part 3.

The recursion continues: I’m documenting platform failures using the successfully deployed platform that emerged from those failures.

The Lessons Learned

This deployment odyssey taught me several important lessons about AI-assisted development:

1. Distinguish Between Technical Execution and Platform Knowledge

Claude’s ability to implement, debug, and fix technical problems was consistently excellent. But its knowledge of platform capabilities and real-world deployment constraints was unreliable.

2. AI Training Data ≠ Current Platform Reality

Platform capabilities change rapidly. Claude’s recommendations were based on historical training data, not current platform states or pricing models.

3. User Requirements Should Override AI Recommendations

My original specification called for Fly.io deployment. Claude’s “confident” pivot to Gigalixir was based on theoretical advantages that didn’t materialize in practice.

4. Technical Adaptability Matters More Than Initial Accuracy

While Claude’s platform recommendations were wrong, its ability to adapt, migrate, and pivot when approaches failed was invaluable.

The Successful Conclusion

Despite the platform shuffle, authentication failures, and deployment complications, the project reached a successful conclusion:

Live Blog: https://blog-nameless-grass-3626.fly.dev/
Total Development Time: ~12 hours across multiple sessions
Total Hosting Cost: $0/month (thanks to SQLite migration)
Platform Recommendation Accuracy: 0% (every recommendation failed)
Technical Implementation Success: 100% (everything Claude built worked)

Looking Back at the Journey

From the initial development in Part 1 through the deployment failures in Part 5, this AI development adventure revealed both the incredible capabilities and surprising limitations of AI-assisted coding:

The Good:

  • Rapid prototyping and feature implementation
  • Sophisticated debugging and problem-solving
  • Systematic approach to complex technical challenges
  • Ability to adapt and pivot when approaches fail

The Not-So-Good:

  • Platform recommendations based on outdated information
  • Overconfidence in capabilities that don’t exist
  • Inability to account for real-world constraints like pricing

The Verdict: AI can build sophisticated applications quickly and effectively, but human oversight is essential for strategic decisions about platforms, architecture, and deployment approaches.

The Final Irony

The blog post you’re reading right now is hosted on Fly.io, using SQLite storage, running in Docker containers, with infinite scroll pagination—exactly the setup that emerged from Claude’s “failed” recommendations.

Sometimes the best solutions come from navigating through the wrong answers.


This concludes our AI development adventure. The blog you’re reading was built entirely through AI-human collaboration, deployed despite multiple platform failures, and serves as a testament to both the capabilities and limitations of AI-assisted development.

The recursion finally ends here. Maybe.