I dislike those black and white takes a lot. It's absolutely true that most startups that just run an EC2 instance will save a lot of cash going to Hetzner, Linode, Digital Ocean or whatever. I do host at Hetzner myself and so do a lot of my clients.
That being said, the cloud does have a lot of advantages:
- You're getting a lot of services readily available. Need offsite backups? A few clicks. Managed database? A few clicks. Multiple AZs? Available in seconds.
- You're not paying up-front costs (vs. investing hundreds of dollars in server hardware), and everything is available right now [0]
- Peak-heavy loads can be a lot cheaper. Mostly irrelevant for your average compute load, but things are quite different if you need to train an LLM
- Many services are already certified according to all kinds of standards, which can be very useful depending on your customers
Also, engineering time, and time in general, can be expensive. If you are a solo entrepreneur or a slow-growth company, you have a lot of engineering time for basically free. But in a quick-growth or prototyping phase, to say nothing of venture funding, things can be quite different. Buying engineering time for >150€/hour can quickly offset a lot of savings [1].
Does this apply to most companies? No. Obviously not. But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.
[0] Compared to the rack hosting setup described in the post. Hetzner, Linode, etc. do provide multiple AZs with dedicated servers.
[1] To be fair, debugging cloud errors can be time consuming too, and experienced AWS engineers will not be cheaper. But the self-hosted equivalent of an RDS instance with solid backups will usually not amortize quickly if you need to pay someone to set it up.
You don't actually need any of those things until you no longer have a "project", but a business which will allow you to pay for the things you require.
You'd be amazed by how far you can get with a home linux box and cloudflare tunnels.
On this site, I've seen this kind of take repeatedly over the past years, so I went ahead and built a little forum that consists of a single Rust binary and SQLite. The binary runs on a Mac Mini in my bedroom with Cloudflare tunnels. I get continuous backups with Litestream, and testing backups is as trivial as running `litestream restore` on my development machine and then running the binary.
Despite some pages issuing up to 8 database queries, I haven't seen responses take more than about 4-5 ms to generate. Since I have 16 GB of RAM to spare, I just let SQLite mmap the whole database and store temp tables in RAM. I could further optimize the backend by e.g. replacing Tera with Askama and optimizing the SQL queries, but the easiest win for latency would be to just run the binary on a VPS close to my users. However, the current setup works so well that I see no point in changing what little "infrastructure" I've built. The other cool thing is that the backend + Litestream uses at most ~64 MB of RAM. Plenty of compute and RAM to spare.
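For the curious, the relevant SQLite settings look something like this via the rusqlite crate. The exact numbers are illustrative (my assumptions, not a prescription), and WAL mode is what Litestream needs in order to replicate the database:

```rust
use rusqlite::{Connection, Result};

/// Open the forum database with the tuning described above:
/// memory-map the whole file and keep temp tables in RAM.
fn open_db(path: &str) -> Result<Connection> {
    let conn = Connection::open(path)?;
    conn.execute_batch(
        "PRAGMA journal_mode = WAL;        -- required for Litestream replication
         PRAGMA synchronous = NORMAL;      -- safe with WAL, fewer fsyncs
         PRAGMA mmap_size = 17179869184;   -- 16 GB: map the entire database file
         PRAGMA temp_store = MEMORY;       -- temp tables and indices live in RAM",
    )?;
    Ok(conn)
}
```

Apart from journal_mode (which persists in the database file), these are per-connection settings, so they get applied every time the binary opens the database.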
It's also neat being able to allocate a few cores on the same machine to run self-hosted GitHub Actions runners, so you can have the same machine doing CI checks, rebuilding the binary, and restarting the service. Turns out the base-model M4 is really fast at compiling code compared to just about every cloud machine I've ever used at previous jobs.
> But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.
Agreed.
This sort of takedown usually points to a gap in the author's experience. Which is totally fine! Missing knowledge is an opportunity. But it's not a good look when the opportunity is used for ragebait, hustlr.
Figuring out how to do db backups _can_ also be fairly time consuming.
There's a question of whether you want to spend time learning AWS or spend time learning your DB's hand-rolled backup options (on top of the question of whether learning AWS's thing even absolves you of understanding your DB's internals anyways!)
I do think there's value in "just" doing a thing instead of relying on the wrapper. Whether that's easier or not is super context and experience dependent, though.
Hmmm, I think you have to figure out how to do your database backups anyway as trying to get a restorable backup out of RDS to use on another provider seems to be a difficult task.
Backups that are stored with the same provider are good, providing the provider is reliable as a whole.
(Currently going through the disaster recovery exercise of, "What if AWS decided they didn't like us and nuked our account from orbit.")
> Figuring out how to do db backups _can_ also be fairly time consuming.
apt install automysqlbackup autopostgresqlbackup
Though if you have proper filesystem snapshots then they should always see your database as consistent, right? So you can even skip database tools and just learn to make and download snapshots.
And making sure you're not making a security configuration mistake that will accidentally leak private data to the open internet because of a detail of AWS you were unaware of.
Any serious business will (might?) have hundreds of TBs of data. I store that in our DC, with a 2nd DC for backup, for about 1/10 the price of what it would cost in S3.
In my case we have a B2B SaaS where access patterns are occasional, revenue per customer is high, general server load is low. Cloud bills just don’t spike much. Labor is 100x the cost of our servers so saving a piddly amount of money on server costs while taking on even just a fraction of one technical employee’s worth of labor costs makes no sense.
I don’t feel like anything really changed? Fairly certain the prices haven’t changed. It’s honestly been pleasantly stable. I figured I’d have to move after a few months, but we’re a few years into the acquisition and everything still works.
Akamai has some really good infrastructure, and an extremely competent global cdn and interconnects. I was skeptical when linode was acquired, but I value their top-tier peering and decent DDoS mitigation which is rolled into the cost.
Guess you came for the hot take without actually using the service or participating in any intelligent conversation. All the sibling comments observe that nothing you are talking about happened.
Snarky ignorant comments like yours ruin Hacker News and the internet as a whole. Please reconsider your mindset for the good of us all.
It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.
AWS has a bunch of startup credits you can use, if you're smart.
But if you want free hosting, nothing beats just CloudFlare. They are literally free and even let you sign up anonymously with any email. They don't even require a credit card, unlike the other ones. You can use cloudflare workers and have a blazing fast site, web services, and they'll even take care of shooing away bots for you. If you prefer to host something on your own computer, well then use their cache and set up a cloudflare tunnel. I've done this for Telegram bots for example.
Anything else - just use APIs. Need inference? Get a bunch of Google credits, and load your stuff into Vertex or whatever. Want to take payments anonymously from around the world? Deploy a dapp. Pay nothing. Literally nothing!
LEVEL 2:
And if you want to get extra fancy, have people open their browser tabs and run your javascript software in there, earning your cryptocurrency. Now you've got access to tons of people willing to store chunks of files for you, run GPU inference, whatever.
PS: For webrtc livestreaming, you can't get around having to pay for TURN servers, though.
LEVEL 3:
Want to have unstoppable decentralized apps that can even run servers? Then use pears (previously dat / hypercore). If you change your mindset, from server-based to peer to peer apps, then you can run hypercore in the browser, and optionally have people download it and run servers.
I'm fully aware this is pedantic, but you can't save 10x. You can pay 1/10. You can save 90%. Your previous costs could have been 10x your current costs. But 10x is more by definition, not less. You can't save it.
In English, x or time(s) after a number marks a "unit" used by various verbs. A 10x increase. Increase by 10x. Go up 10x. Some of these verbs are negative like decrease or save. "Save 10x" is the same as "divide by 10". Four times less, 5 times smaller etc. are long attested.
No, x literally means multiply. It doesn't somehow also mean divide. They should use the percent sign; that's what it's for. "10x my costs" means 10 × my cost; it's literally an equation.
It's just inversion, like 2 to the power of 2 or 2 to the power of negative 2. These negative words inverse it just the same. You may dislike it, but millions of people have spoken this way for a long time.
> x literally means multiply
And some use the dot operator or even 2(3) or (2)(3). When programming, we tend to use *.
hmm.. if you reduce latency from one second to a hundred milliseconds, could you celebrate that you've made it 10x faster, or would you have the same quibble there too?
Edit: Thinking about this some more: You could say you are saving 9x [of the new cost], and it would be a correct statement. I believe the error is assuming the reference frame is the previous cost vs the new cost, but since it is not specified, it could be either.
> if you reduce latency from one second to a hundred milliseconds, could you celebrate that you've made it 10x faster
Yes you can, because speed has units of inverse time and latency has units of time. So it could be correct to say that cutting latency to 1/10 of its original value is equivalent to making it 10x the original speed - that's how inverses work.
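To restate that in symbols, with the 1 s down to 100 ms example from above:

```latex
\text{speed} \propto \frac{1}{\text{latency}}
\qquad\Rightarrow\qquad
\frac{1/(0.1\ \mathrm{s})}{1/(1\ \mathrm{s})} = 10
```

"10x faster" works because the quantity being multiplied is the inverse of the one being reduced; the latency itself ends up at 1/10 of (90% less than) its old value.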
Savings are not, to my knowledge, measured in units of inverse dollars.
People commonly use this expression in everyday conversation, such as, "you could save 10 times as much if you would just shop at Costco." So I agree with OP, their comment is correct but pedantic.
Cost of item = 10
First discounted cost of item = 9
=> First saving = 1
Second discounted cost of item = 6
=> Second saving = 4
Second saving is 4x first saving.
(Edit - formatting)
But that's 4x the savings compared to another saving. I suppose you've upped the pedantry and are technically correct, but that's a pretty narrow use case and not the one used in the article.
Consider it as getting 10x the resources for the same price - that is, the resource-to-price ratio is 10x. Except you don't need 10x the resources so you choose to get 1x the resources for 0.1x the price instead.
The author touches on it briefly, but I'd argue that the cloud is immensely helpful for building (and tearing down) an MVP or proving an early market for a new company using startup credits or free tiers offered by all vendors. Once a business model has been proven, individual components and the underlying infrastructure can be moved out of the cloud as soon as cost becomes a concern.
This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner, and developers must stay disciplined to keep components portable from day one, but you can get a lot of mileage out of free credits without burning dollars on any infrastructure. The biggest challenge becomes finding the time to perform these migrations among other competing priorities, such as new feature development, especially if you're growing fast.
Our startup is mostly built on Google Cloud, but I don't think our sales rep is very happy with how little we spend or that we're unwilling to "commit" to spending. The ability to move off of the cloud, or even just to another cloud, provides a lot of leverage in the negotiating seat.
Cloud vendors can also lead to an easier risk/SLA conversation for downstream customers. Depending on your business, enterprise users like to see SLAs and data privacy laws respected around the globe, and cloud providers make it easy to say "not my problem" if things are structured correctly.
Seems like nowadays people are less concerned with vendor lock-in than they were 15 years ago. One of the reasons to avoid lock-in is to be able to move when the price gouging gets just greedy enough that the move is worth the cost. One of the drawbacks of all these built-in services at AWS is the expense of trying to recreate the architecture elsewhere.
> This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner
Right. But none of the cloud providers encourage that mode of thinking, since they all have completely different frontends, APIs, different versions of the same services (load balancers, storage), etc. Even if you standardize on k8s, the implementations can be chalk and cheese between two cloud providers. The lock-in is way worse with cloud providers.
I'd be more interested to understand (from folk who were there) what the conditions were that made AWS et al such a runaway hit. What did folks gain, and have those conditions meaningfully changed in some way that makes it less of a slam dunk?
My recollection from working at a tech company in the early 2010s is that renting rack space and building servers was expensive and time consuming, estimating the right hardware configuration for your business was tricky, and scaling different services independently was impossible. Also, multi-regional redundancy was rare (remember when Squarespace was manually carrying buckets of petrol for generators up many flights of stairs to keep servers online post-Sandy? [1]).
AWS fixed much of that. But maybe things have changed in ways that meaningfully changes the calculus?
You're falling into the false dichotomy that always comes up with these topics: as if the choice is between the cloud and renting rack space while applying your own thermal paste on the CPUs.
In reality, for most people, renting dedicated servers is the goldilocks solution (not colocation with your own hardware).
You get an incredible amount of power for a very reasonable price, but you don't need to drive to a datacenter to swap out a faulty PSU, the on site engineers take care of that for you.
I ordered an extra server today from Hetzner. It was available 90 seconds afterwards. Using their installer I had Ubuntu 24.04 LTS up and running, and with some Ansible playbooks to finish configuration, all in all from the moment of ordering to fully operational was about 10 minutes tops. If I no longer need the server I just cancel it, the billing is per hour these days.
Bang for the buck is unmatched, and none of the endless layers of cloud abstraction getting in the way. A fixed price, predictable, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be.
I find it a blissful way to work.
True, but I think you're touching on something important regarding value. Value is different depending on the consumer: for you, you're willing and able to manage more of the infrastructure than someone who has a more narrow skillset.
Being able to move the responsibility for areas of the service on to the provider is what we're paying for, and for some, paying more money to offload more of the responsibility actually results in more value for the organization/consumer
AWS also made huge inroads in big companies because engineering teams could run their own account off of their budget and didn’t have to go through to IT to requisition servers, which was often red tape hell. In my experience it was just as much about internal politics as the technical benefits.
Seconded. I was working for a storage vendor when AWS was first ascendant. After we delivered hardware, it was typically 6-12 weeks to even get it powered up, and often a few weeks longer to complete deployment. This is with professional services, e.g. us handling the setup once we had wires to plug in. Similar lead time for ordering, racking, and provisioning standard servers.
The paperwork was massive, too. Order forms, expense justifications, conversations with Legal, invoices, etc. etc.
And when I say 6-12 weeks, I mean that was a standard time - there were outliers measured in months.
Absolutely. At several startups, getting a simple €20–50/month Hetzner server meant rounds with leadership and a little dance with another department to hand over a credit card. With AWS, leadership suddenly accepted that Ops/Dev could provision what we thought was right. It isn’t logically compelling, but that’s why the cloud gained traction so quickly: it removed friction.
> At several startups, getting a simple €20–50/month Hetzner server meant rounds with leadership and a little dance with another department to hand over a credit card.
That's not a startup if you can't go straight to the founder and get a definite yes/no answer in a few minutes.
Computing power (compute, memory, storage) has increased 100x or more since 2005, but AWS prices are not proportionally cheaper. So where you were getting a reasonable value in ~2012, that value is no longer reasonable, and by an increasing margin.
In 2006 when the first EC2 instances showed up they were on par with an ok laptop and would take 24 months to pay enough in rent to cover the cost of hardware.
Today the smallest instance is a joke and the medium instances are the size of a 5 year old phone. It takes between 3 to 6 months to pay enough in rent to cover the cost of the hardware.
What was a great deal in 2006 is a terrible one today.
The free credits... what a WILD time! Just show up to a hackathon booth, ask nicely, and you'd get months/years worth of "startup level" credits. Nothing super powerful - basically the equivalent of a few quad core boxes in a broom closet. But still for "free".
> But maybe things have changed in ways that meaningfully changes the calculus?
I'd argue that Docker has done that in a LOT of ways. The huge draw to AWS, from what I recall with my own experiences, was that it was cheaper than on-prem VMware licenses and hardware. So instead of virtualizing on proprietary hypervisors, firms outsourced their various technical and legal responsibilities to AWS. Now that Docker is more mature, largely open source, way less resource intensive, and can run on almost any compute hardware made in the last 15 years (or longer), the cost/benefit analysis starts to favor moving off AWS.
Also AWS used to give out free credits like free candy. I bet most of this is vendor lock in and a lot of institutional brain drain.
> although actually many people on here are American so I guess for you aws is legally a person...
Corporate legal personhood is actually older than Christianity, and it being applied to businesses (which were late to the game of being allowed to be corporations) is still significantly older than the US (starting with the British East India Company), not a unique quirk of American law.
What is unique in the US is the interaction between corporate personhood and our First Amendment and the way that our courts have applied that to limit political campaign finance laws, and a lot of “corporate personhood” controversy is really about that, not actually about corporate personhood as a broad concept.
People also get confused about the Citizens United ruling. It had nothing to do with corporate personhood.
The ruling said that since a person has first amendment rights, those same rights extend to a group of people—any group—whether it’s a non profit organization, a corporation, or something else.
I don't think that's a hard and fast rule? I think et al is for named, specific entities of any kind. You might say "palm trees, evergreen trees, etc" but "General Sherman, Grand Oak, et al".
Is your time worth more than what the fully managed services on AWS cost? And I mean that quite literally in the sense of your billable hours.
If you like spending time tinkering with manually configuring linux servers, and don't have anything else that generates better value for you to do - by all means go for it.
For small pet projects with no customers, a cheap Hetzner box is probably fine. For serious projects with customers that expect you to be able to get back on your feet quickly when shit hits the fan, or teams of developers that lose hours of productivity when a sandbox environment goes down, maybe not so much.
Is AWS more expensive than the salary of the infra guy you would need to hire to do all this stuff by hand? Probably not.
It's not actually that hard to get your own server racked up in a data centre; I have done it. Since it was only one box that I built at home, I just shipped it and they racked it in the shared area, plugged in the power and network, and gave me the IP address. It was cheaper than renting from someone like Hetzner: about £15 a month at the time for 1A and 5 TB a month of traffic at 1 Gbps, plus a one-off install fee of £75.
At the time I did this no one had good gaming CPUs in the cloud (they are still a bit rare, especially in VPS offerings), and I was hosting a gaming community and server. So I built a dual-machine 1U, giving me a public and a private server, with RAID 1 drives on both and redundant power. I ran that for many years until it was obsolete. It wasn't difficult, and I think the whole thing was about £1200, which for two computers running game servers wasn't too terrible.
I didn't do this because it was necessarily cheaper, I did it because I couldn't find a cloud server to rent with a high clockspeed CPU in it. I tested numerous cloud providers, sent emails asking for specs and after months of chasing it down I didn't feel like I had much choice. Turned out to be quite easy and over the years it saved a fortune.
"Remote hands" is the DC term for exactly what it sounds like. You write a list of instructions and someone hired by the DC will go over to your rack and do the thing.
The problem isn't setup, it's maintaining it. That's not an easy job sometimes. I'm not trying to dissuade people from running their own servers, but it's something to consider.
Cheap shot maybe, but the fact that the page takes 10 seconds to load when it hits the HN front page is a great, inadvertent illustration of why you might want to use the cloud sometimes.
The failure mode of self-hosting is that your site gets hugged to death, the failure mode of the cloud is that you lose a ton of money. For a blog that doesn't earn you anything, the choice is clear.
Besides, you can just put it behind cloudflare for free.
OP here. As others have said, it loads immediately for me (tested on desktop + on mobile data + incognito)
The entire site is cached + Cloudflare sits on top of everything. I just ran a couple performance tests under the current HN traffic (~120 concurrent visitors) and everything looks good, all loads under 1 second. The server is quite happy at an average load of 0.06 right now, not even close to start breaking a sweat.
Turns out you can get off the cloud and hit the frontpage of HN and your site will be alright.
100% true, I've hit the front page of hn on a server with an old i5 (aka consumer hardware, and not even high end) with no cloudflare or similar caching, and had no problems. Computers are fast, and serving static html over https is a solved problem.
That happens a lot to blogs deployed on the cloud too. They just need to put a small cache in front and they'll be able to serve one or two orders of magnitude more requests per second.
This isn’t a binary issue. I disagree with these “abandon the cloud” takes but do agree that most folks spend way way more than they should.
The biggest threat to cloud vendors is that everyone wakes up tomorrow and cost optimizes the crap out of their infrastructure. I don’t think it’s hyperbolic to say that global cloud spending could drop by 50% in 3 months if everyone just did a good audit and cleaned up their deployments.
> The biggest threat to cloud vendors is that everyone wakes up tomorrow and cost optimizes the crap out of their infrastructure
Well there's no danger of that. Even with AWS telling you exactly how to save money (they have a thousand different dashboards showing you where you can save, and even support will tell you), it'll still take you months to work through all the cost optimization changes. Since it's annoying and complicated to do, most people won't do it.
Their billing really is ridiculous. We have a TAM and use a reseller, and it's virtually impossible for us to see what we actually spend each month, what with the reseller discounts, enterprise PPA, savings plans, RIs, and multiple accounts. Their support reps even built us some kinda custom BI tool just to look at costs, and it's still not right.
We have to use cloud because we're at the low end of 10^5 servers. Once you hit the high end of 10^3 this is really where you need to be.
Everything we're doing is highly inefficient because of decades of fast and loose development by immature software engineers...and having components in the stack that did the same.
If I had 5 years to rewrite and redesign the product to reflect today's computing reality, I could eliminate 90%+ of the cost. But I'll never get that kind of time. Not with 1000 more engineers and 1000 more years and the most willing customers.
You might get lucky enough that you and a bunch of your customers are so fed up with your company that you get to create the competition.
"To them, it’s way too convenient to be on AWS: not only it solves their problem, but it’s also a shiny object. It’s technically complex, it makes them look smart in front of other devs, "
Why? Why be so obnoxious to other people who you claim are being obnoxious to you? No need to read your blog post now.
I think the main thing holding people back from leaving the cloud is simple inertia. There was a time when the cloud was obviously the right choice. Static IPv4 addresses were becoming scarce, IPv6 had not been deployed widely enough, and cloud providers made it easy to stand up a server and some storage with high speed links on the cheap. But over time, things have changed. Rate limits, data caps, and egress fees are now normal (and costly). IPv6 is now deployed widely enough that you might be willing to just run an IPv6-only stack, which greatly simplifies running a server on-premise. And of course, we've all seen time and again how providers will carelessly lock out your cloud account for arbitrary reasons with little to no recourse. The time has come to own your infrastructure again. But that won't happen until people realize it's easy to do.
The first couple of paragraphs of price comparisons are useful. Then there are many paragraphs of sheer waffle. The author doesn't even seem able to define what "the cloud" is:
> The whole debate of “is this still the cloud or not” is nonsense to me. You’re just getting lost in naming conventions. VPS, bare metal, on-prem, colo, who cares what you call it. You need to put your servers somewhere. Sure, have a computer running in your mom’s basement if that makes you feel like you’re exiting the cloud more, I’ll have mine in a datacenter and both will be happy.
I read the whole thing and I didn't see any waffle. Sure, undeniably some excess word count, some emotion in responding to critics. But no waffle.
The "is this cloud or not" debate in the piece makes perfect sense. Who cares whether Hetzner is defined as "the cloud" or not? The point is, he left AWS without going to Azure or some other obvious cloud vendor. He took a step towards more hands on management. And he saved a ton of money.
The cheap hosting service they switched to is arguably "cloud".
If you can't drive to the location where your stuff is running, and then enter the building blindfolded, yet put your hands on the correct machine, then it's cloud.
It’s entirely possible for a rented server to host a site that gets millions of views. It’s also entirely possible to make an AWS setup that chokes with 100.
I've always found AWS IAM quite simple, but then again it is my job, so I might be biased. I haven't really dug into GCP well enough to understand it, but I did find it quite daunting to start the few times I messed with it. What's complex about it to you?
For personal projects, honestly, the built in roles AWS provides are okay enough for some semblance of least privilege x functionality IMO.
Plus, most of AWS's documentation tells you the specific policy JSON to use if you need to do XYZ thing, just fill in the blanks.
I have a VPS. It costs me £1.34 per month. It's way over-powered for what I need it for.
However, one situation where I think the cloud might be useful is for archive storage. I did a comparison between AWS Glacier Deep Archive and local many-hard-drive boxes for storing PB-scale backups, and AWS just squeaked in as slightly cheaper, but only because you only pay for the amount you use, whereas if you buy a box you have to pay for the unused space as well. And it's off-site, which is a resilience advantage. And the defrosting/downloading charge was acceptable, at effectively 2.5 months' worth of storage. However, at smaller scales you would probably win with a small NAS, and at larger scales you'd be able to set up a tape library and fairly comprehensively beat AWS on price.
It's a weird service, because below that scale AWS is crazy expensive for storage; especially down in the TB range it's awful value compared to your own box and drives. But once you get into the PB scale, AWS actually seems to be competitive, I guess because the GB/TBs they are selling come from PB-scale solutions and all the overhead that entails.
I've been at too many startups with a devops team that would rather provision 15 machines with 4 GB RAM than one with 64 GB.
I once got into an argument with a lead architect about it, and it's really easy to twist the conversation into "don't you think we'll reach that scale?" to justify complexity.
The bottom line is that, for better or worse, the cloud and microservices are keeping a lot of jobs relevant, and there's no benefit in convincing people otherwise.
I would really be interested in an actual comparison, where e.g. someone compares the full TCO of a mysql server with backup, hot standby in another data center and admin costs.
On AWS an Aurora RDS is not cheap. But I don't have to spend time or money on an admin.
Is the cost justified? Because that's what cloud is. Not even talking about the level of compliance I get from having every layer encrypted when my hosted box is just a screwdriver away from data getting out the old school way.
When I'm small enough or big enough, self managed makes sense and probably is cheaper. But when getting the right people with enough redundancy and knowledge is getting the expensive part...
But actually - I've never seen this in any of these arguments so far. Probably because actual time required to manage a db server is really unpredictable.
> Probably because actual time required to manage a db server is really unpredictable.
This, and also startups are quite heterogeneous. If you have an engineer on your team with experience in hosting their own servers (or at least a homelab-person), setting up that service with sufficient resiliency for your average startup will be done within one relaxed afternoon. If your team consists of designers and engineers who hardly ever used a command line, setting up a shaky version of the same thing will cost you days - and so will any issue that comes up.
It's a skillset that is out of favour at the moment as well, but having someone who has done server ops and devops and can develop as well is generally a bit of a money saver, because they open up possibilities that don't exist otherwise. I think it's a skillset that no one really hired for past about 2010, when cloud was mostly taking off; it got replaced with cloud engineers or pure devops or ops people, but there used to be people with this mixed skillset on most teams.
There are many scenarios in which cloud providers (especially AWS) make sense.
Ideally, your company has technical experts who can do quite a lot of things non-cloud, so you can make informed decisions about near-term costs, complexity, vendor lock-in, execution speed, etc.
I'm especially a fan of cloud providers for early startups, which tend to be high on velocity, and low on workers. And the free credits programs often solve the early problem of being low on dollars.
If you’re going to write a post about why self-hosting is better than cloud*, then it’s probably a good idea to make sure your site loads in under a minute.
* at least I assume what this post is; I’m still waiting for it to load.
>This happens with Hetzner all the time because they have no VLANs and all customers are on a single LAN and IPFS tries to discover other nodes in the same LAN by default.
If all you need is compute, then yeah, self-hosting is easy. Otherwise, do you think just about every company under the sun is a sucker for being on the cloud? If it were so easy, companies would either be constantly dropping prices to compete with all the self-hosters, or new companies would appear to fill in the price gaps.
A 2024 International LT Series semi-truck costs $130,000. That's very expensive compared to a $30,000 Ford Maverick.
Both of these trucks can technically be used to pick up groceries and commute. But, uh, if you bought the semi-truck to get groceries and commute? Nobody scammed you; you bought the wrong truck. You don't have to buy the biggest, most expensive truck to do small jobs. But also, just because there's a cheaper truck available, doesn't mean the semi-truck is overpriced or a scam. The semi is more expensive for a reason.
I wonder about people who write articles like these. I imagine at one point he believed he had to use the cloud, so he started using it without understanding what he was doing. As a result, he was charged a ton of money. He then found out there were cheaper, simpler ways to do what he wanted. And so, feeling hurt, embarrassed, and deceived, he writes this long screed, accusing people of scamming him - I mean scamming you - because you (not him!) could not possibly need to use the cloud, even though you (not him!) assumed you had to use the cloud.
Yes, dude. The cloud is expensive. Sorry you found out the hard way. And for what it's worth, you don't need a datacenter either; stick a 1U in your closet and call it a day.
Sometimes I think I am out of the loop for using dedicated servers from OVH, DigitalOcean, and Hetzner while others spend thousands of dollars on the things I spend barely a few hundred on. This always made me think I was not a good enough developer to know the cutting-edge things others know.
Turns out most developers suck at handling bare-metal boxes with a Linux distro + nginx and a few other pieces to do the same things as the fancy-named AWS stuff. If you are in the same boat, just know that most of these developers suck at what they are doing and don't care about the company budget.
You can get 99.99% of things done with bare metal + Cloudflare, including multi-server redundancy, at a fraction of AWS and Azure costs. Most of these technologies are just fancy words for basic Linux services.
I think a lot of teams using cloud are using SaaS rather than IaaS. They want a Redis and a Postgres and an S3 and a ... You can set all that up on a server, but it's not very fun if you've never done it before.
It's an informative post, but I really dislike the language and style that are becoming common in these kinds of posts, e.g.:
> Look, first of all, you’re as unique as the other 1000 peanut gallery enjoyers that have made the same astute observation before you. Congratulations. But you’re absolutely missing the point.
Why does it feel like the author is too young and just had the breakthrough discovery that he can have servers without clouds!? This has always been a thing; clouds were/are used in areas where they work better, say some integration with already existing infrastructure, or some quick scaling. Just like everything, there's always an upside and a downside, and it's just about what suits your needs. The author should next try an on-prem approach where he even controls the hardware: even cheaper, but with extra maintenance. For example, I found a used server a while ago (44-core HP Z840 workstation, dual Xeon E5-2699 v4, 512 GB RAM) for around $1000, and that's a one-time payment.
The idea that it's cheaper not to use AWS is clear.
I was hoping to see more about porting AWS proprietary features into generic servers.
A big part of the problem isn't just monthly rent, it's vendor lock-in. When your whole system is implemented using AWS specific features, you're not going to run anywhere else.
AWS, and any other third party vendor, can and does obsolete features. Then you're having to port your system just to keep it running on the third party service.
Once you're implemented in a generic server, in a VPS, or your mom's basement, you're free to move to any other hosting provider, data center, whatever.
The loss of understanding that 3rd-party dependencies are not good for your company or project seems a bigger loss to the technical community than FTP hacking...
Jeez, this was a painful read. I actually stopped after a few paragraphs and asked AI to make it more technically focused and remove the ranting so I could stomach it.
Strawman arguments, ad hominem attacks and Spongebob mocking memes, and the casual venturing into conspiracy theories and malicious intentions...
> Why do all these people care if I save more money or not? ... If they’re wrong, and if I and more people like me manage to convince enough people that they’re wrong, they may be out of a job soon.
I have a feeling AWS is doing fine without him. Cloud is one of the fastest growing areas in tech because their product solves a need for certain people. There is no larger conspiracy to keep cloud in business by silencing dissent on Twitter.
> You will hear a bunch of crap from people that have literally never tried the alternative. People with no real hands-on experience managing servers for their own projects for any sustained period of time.
This is more of a rant than a thoughtful technical article. I don't know what I was expecting, because I clicked on the title knowing it was clickbait, so shame on me, I guess...
> Most people complaining about what I did happen to have “devops”, “cloud engineer”, “serverless guy”, “AWS certified”, or something similar in their bio.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Sorry but my $3 AWS instance is still cheaper than all of those options.
If you need a lot of, well, anything, be it compute, memory, storage, bandwidth etc., of course cloud stuff is going to be more expensive... but if you don't need that, then IMO $3/mo on-demand pricing really can't be beat when I don't have to maintain any equipment myself. Oracle also offers perpetually free VM instances if you don't mind the glow.
With a quick LLM-assisted search, looks like the cheapest EC2 instance is t4g.micro, which comes in at $2.04/mo. It has 2 vCPUs and only 512MiB of RAM. (I assume that doesn't include disk; EBS will be extra.)
I can certainly see a use for that small amount of compute & RAM, but it's not clear that your level of needs is common. I've been paying for a $16/mo VPS (not on AWS) for about 15 years. It started out at $9/mo, but I've upgraded it since then as my needs have grown. It's not super beefy with 2 vCPUs, 5GiB of RAM, and 60GiB of disk space (with free data ingress/egress), but it does the job, even if I could probably find it cheaper elsewhere.
But not at Amazon. Closest match is probably a t3.medium, with 2 vCPUs and 4GiB RAM. Add a 60GiB gp2 EBS volume, and it costs around $35/mo, and that's not including data transfer.
The point that you're missing is we're not looking for the cheapest thing ever, we're looking for the cheapest thing that meets requirements. For many (most?) applications, you're going to overpay (sometimes by orders of magnitude) for AWS.
You say "if you need a lot", but "lot" is doing a bit of work there. My needs are super modest, certainly not "a lot", and AWS is by far not the cheapest option.
I run heaps of services on AWS and my bill is ~$2-3 - I'm not running any EC2 instances at all. Some of the offerings these cloud providers offer are extremely affordable if you know how to play your cards right and use the right services.
Just get a raspberry pi and run it from your own home internet. You should already be paying for a VPN service and your regular internet service, so you should be able to trivially work out a self-hosted solution. You'll recover your costs inside of two years and come out the other end better off for it.
Don't give the big cloud companies an inch if you don't absolutely have to. The internet needs and deserves the participation of independent people putting up their own services and systems.
Amazon really doesn't care if your $10,000 bed folds up on you like a sandwich and cooks you when AWS us-east-1 goes down, or stops your smart toilet from flushing, or sets bucket defaults that allow trivial public access to information you assume to be secure, because nobody in their right mind would just leave things wide open.
Each and every instance of someone doing something independently takes money and control away from big corporations that don't deserve it, and it makes your life better. You could run pihole and a slew of other useful utilities on your self-hosted server that benefit anyone connected to your network.
AI can trivially walk you through building your own self-hosted setups (or even set things up for you if you entrust it with an automation MCP.)
Oracle and AWS and Alphabet and the rest shouldn't profit from eating the internet - the whole world becomes a better place every time you deny them your participation in the endless enshittification of everything.
yet another obsessive take on "cloud is bad and expensive" eh? I think it vastly understates the value of some SaaS offerings in terms of time saved for small companies. running and managing numerous DBs, k8s clusters, ci/cd pipelines and stateless container systems is simply impossible with a team of 1-2 people. sure, if the setup is simple and only requires a few classic components, this is way cheaper and will work fine for a 99.9% SLA. otherwise it only makes sense if you had very large cloud bills and can dedicate multiple engineers to the newly created tasks.
I think we've gone a little nuts defining "production system" these days. I've worked for companies with zero-downtime deployments and quite a lot of redundancy for high availability, and for some applications it's definitely worthwhile.
But I think for many (most?) businesses, one nine is just fine. That's perfectly doable by one person, even if you want, say, >=96% uptime, which allows for 350 hours of downtime per year. Even two nines allows for ~88 hours of downtime per year, and one person could manage that without much trouble.
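For reference, the arithmetic behind those figures (8,760 hours in a year):

```latex
8760 \times (1 - 0.96) \approx 350\ \text{h/yr},
\qquad
8760 \times (1 - 0.99) \approx 88\ \text{h/yr}
```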
Most businesses aren't global. Downtime outside regular business hours for your timezone (and perhaps one or two zones to the west and east of you) is usually not much of a problem, especially if you're running a small B2B service.
For a small business that runs on 1-3 servers (probably very common!), keeping a hot spare for each server (or perhaps a single server that runs all services in a lower-supported-traffic mode) can be a simple way to keep your uptime high without having to spend too much time or money. And people don't have to completely opt out of the cloud; there are affordable options for e.g. managed RDBMS hosting that can make maintenance and incident response significantly easier and might be a good choice, depending on your needs.
(Source: I'm building a small one-person business that is going to work this way, and I've been doing my research and gaming it out.)
One thing that AWS, Google and Azure do that your own systems don't is release their updates whenever it suits them, often taking your business down in the middle of the day with their own problems. You can't fix it, you can't roll back what you just did to get back up and running; you just have to sit and wait.
That is quite different to a business that turns off its boxes for an hour at 0100 Sunday morning to do updates and release new software. The downtime isn't equivalent, because it really matters when it happens and whether that hurts your use case or not. Your own system might be down for more hours a year than AWS, but it's not down Monday to Friday in the evening when you do most of your sales, because you refuse to touch anything during that period and schedule all the work and updates outside it.
It also feels like AWS (or Azure) isn't really that much more reliable than your own thing. But half the internet is down at the same time so you don't get blamed as much.
Its the "No one gets blamed for going IBM" thing in the modern era. They are making it someone elses fault and absolves the blame. The problem is if your competitor is still up you could be loosing customers on average mid day outage, even if they are down for 3x as long its not when it matters.
> running and managing numerous DBs, k8s clusters, ci/cd pipelines and stateless container systems is simply impossible with a team of 1-2 people
Then don't. If your team and budget are small enough not to hire a sysadmin, then your workload is (almost certainly) small enough to fit on one server, one Postgres database, Jenkins or a bash script, and certainly no k8s.
The post is about that 99% of companies that will never go large scale. Its point is that they don't need cloud, buying a server or two is all they need.
Just one of the couple dozen databases we run for our product in the dev environment alone is over 12 TB.
How could I not use the cloud?
https://www.seagate.com/products/enterprise-drives/exos/exos...
> one of the couple dozen databases
I guess this is one of those use cases that justify the cloud. It's hard to host that reliably at home.
12 TB fits entirely into the RAM of a 2U server (cf. Dell PowerEdge R840).
However, I think there's an implicit point in TFA; namely, that your personal and side projects are not scaling to a 12 TB database.
With that said, I do manage approximately 14 TB of storage in a RAIDZ2 at my home, for "Linux ISOs". The I/O performance is "good enough" for streaming video and BitTorrent seeding.
However, I am not sure what your latency requirements and access patterns are. If you are mostly reading from the 12 TB database and don't have specific latency requirements on writes, then I don't see why the cloud is a hard requirement? To the contrary, most cloud providers provide remarkably low IOPS in their block storage offerings. Here is an example of Oracle Cloud's block storage for 12 TB:
https://docs.oracle.com/en-us/iaas/Content/Block/Concepts/bl...
Those are the kind of numbers I would expect of a budget SATA SSD, not "NVMe-based storage infrastructure". Additionally, the cost for 12 TB in this storage class is ~$500/mo. That's roughly the cost of two 14 TB hard drives in a mirror vdev on ZFS (not that this is a good idea, btw).
This leads me to guess most people will prefer a managed database offering rather than deploying their own database on top of a cloud provider's block storage. But 12 TB of data in the gp3 storage class of RDS costs about $1,400/mo. That is already triple the cost of the NAS in my bedroom.
Lastly, backing up 12 TB to Backblaze B2 is about $180/mo. Given that this database is for your dev environment, I am assuming that backup requirements are simple (i.e. 1 off-site backup).
The key point, however, is that most people's side projects are unlikely to scale to a 12 TB dev environment database.
Once you're at that scale, sure, consider the cloud. But even at the largest company I worked at, a 14 TB hard drive was enough storage (and IOPS) for on-prem installs of the product. The product was an NLP-based application that automated due diligence for M&As. The storage costs were mostly full-text search indices on collections of tens of thousands of legal documents, each document could span hundreds to thousands of pages. The backups were as simple as having a second 14 TB hard drive around and periodically checking the data isn't corrupt.
Still missing the point. This is just one server, and just in the dev environment.
How many pets do you want to be tending to? I have 10^5 servers I'm responsible for...
The quantity and methods the cloud affords me allow me to operate the same infrastructure with 1/10th as much labor.
At the extreme ends of scale this isn't a benefit, but for large companies in the middle this is the only move that makes any sense.
99% of posts I read talking about how easy and cheap it is to be in the datacenter all have a single digit number of racks worth of stuff. Often far less.
We operate physical datacenters as well. We spend multiple millions in the cloud per month. We just moved another full datacenter into the cloud and the difference in cost between the two is less than $50k/year. Running in physical DCs is really inefficient for us for a long list of annoying and insurmountable reasons. And we no longer have to deal with procurement and vendor management. My engineers can focus their energy on more valuable things.
After having worked at several startups using AWS, I don't really buy the "pets vs cattle" argument, because the "cattle" turn out to be "pets" that require maintaining a large collection of Terraform configuration and Helm charts (or Kustomize manifests), with things done differently depending on whether you're in the dev, staging, or prod environment. And that's ignoring how ad-hoc and "pet-like" the "cattle" can become when a codebase needs a long-running feature branch and a special environment provisioned for testing it.
It was really telling when I had to help a frontend developer debug his local Kubernetes cluster and he randomly said "I don't get why we need so much configuration. I thought the point of Docker was that you just pull the image and run it". So yeah, we had backend developers (and a devops guy) going all-in on these "cattle", but they sure behaved like "pets" in every single aspect :)
So, to answer your question: "How many pets do you want to be tending to?" I certainly don't want pets that speak HCL unofficially (and the official path is some loose YAML schema whose permissions configuration breaks over time). It's questionable how much time one really is saving in this situation.
In fact, I think "How many pets do you want to be tending to?" is an inaccurate question; the more accurate question to ask is: "what do you want your day-2 maintenance to look like?" Each technology has its own "day 2 maintenance". The "day 2 maintenance" of e.g. SQS, is certainly a lot different compared to, for instance, using PostgreSQL's SKIP LOCKED as a queue. Most backend developers have experience with Postgres and its tooling; can't say the same about SQS (and its myriad of mocks for local development).
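For what it's worth, the SKIP LOCKED pattern itself is tiny. A rough sketch against a hypothetical `jobs` table (the names are made up, not from anyone's schema):

```
# Hypothetical schema; adjust names to taste.
psql -c "CREATE TABLE IF NOT EXISTS jobs (id bigserial PRIMARY KEY, payload jsonb, done boolean NOT NULL DEFAULT false);"

# Each worker atomically claims one pending job; SKIP LOCKED means concurrent workers never block each other.
psql -c "
WITH next AS (
  SELECT id FROM jobs
  WHERE NOT done
  ORDER BY id
  FOR UPDATE SKIP LOCKED
  LIMIT 1
)
UPDATE jobs SET done = true
FROM next
WHERE jobs.id = next.id
RETURNING jobs.id, jobs.payload;
"
```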
The calculus changes depending on scale -- in fact, not even Kubernetes can scale past 10^3 nodes[0] -- but let's also be honest: how many cloud users are managing so many servers (10^5 is huge!) that not even Kubernetes can handle it?
[0] https://kubernetes.io/docs/setup/best-practices/cluster-larg...
Why do people think it takes "labor" to have a server up and running?
Multiple millions in the cloud per month?
You could build a room full of giant servers and pay multiple people for a year just on your monthly server bill.
First of all, if you have a dev DB that’s 12 TB, I can practically guarantee that it is tremendously unoptimized.
But also, that’s extremely easily handled with physical servers - there are NVMe drives that are 10x as large.
12 TB is easy. https://yourdatafitsinram.net/
What's your cloud bill?
You can get quite far without that box, even, and just use Cloudflare R2 as free static hosting.
CloudFlare Pages is even easier for static hosting with automatic GitHub pulls.
Happy Netlify customer here, same deal. $0.
(LOL 'customer'. But the point is, when the day comes, I'll be happy to give them money.)
> But the cloud is not too expensive - you're paying for stuff you don't need. That's an entirely different kind of error.
Agreed. These sort of takedowns usually point to a gap in the author's experience. Which is totally fine! Missing knowledge is an opportunity. But it's not a good look when the opportunity is used for ragebait, hustlr.
> A few clicks.
Getting through AWS documentation can be fairly time consuming.
Figuring out how to do db backups _can_ also be fairly time consuming.
There's a question of whether you want to spend time learning AWS or spend time learning your DB's hand-rolled backup options (on top of the question of whether learning AWS's thing even absolves you of understanding your DB's internals anyways!)
I do think there's value in "just" doing a thing instead of relying on the wrapper. Whether that's easier or not is super context and experience dependent, though.
Hmmm, I think you have to figure out how to do your database backups anyway as trying to get a restorable backup out of RDS to use on another provider seems to be a difficult task.
Backups that are stored with the same provider are good, providing the provider is reliable as a whole.
(Currently going through the disaster recovery exercise of, "What if AWS decided they didn't like us and nuked our account from orbit.")
aws would never do that :) plus you can also do it in aws with like 75 clicks around UI which makes no sense even when you are tripping on acid
> Figuring out how to do db backups _can_ also be fairly time consuming.
apt install automysqlbackup autopostgresqlbackup
Though if you have proper filesystem snapshots then they should always see your database as consistent, right? So you can even skip database tools and just learn to make and download snapshots.
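Roughly that flow, assuming ZFS and a dataset named tank/db (both assumptions, not from the comment above). The snapshot is crash-consistent, so the database replays its WAL/journal on restore, same as after a power loss:

```
# Take an atomic snapshot of the dataset the database lives on (dataset name is illustrative).
SNAP="tank/db@nightly-$(date +%F)"
zfs snapshot "$SNAP"

# Ship it to another box; later snapshots can go incrementally with `zfs send -i`.
zfs send "$SNAP" | ssh backup-host "zfs receive -u backuppool/db"
```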
most definitely do not want to spend time learning aws… would rather learn about typewriter maintenance
And making sure you're not making a security configuration mistake that will accidentally leak private data to the open internet because of a detail of AWS you were unaware of.
gotta say, Amazon Q can do the details for you in many cases.
any serious business will (might?) have hundreds of TBs of data. I store that in our DC, with a 2nd DC for backup, for about 1/10 the price of what it would cost in S3.
When does the cloud start making sense?
In my case we have a B2B SaaS where access patterns are occasional, revenue per customer is high, general server load is low. Cloud bills just don’t spike much. Labor is 100x the cost of our servers so saving a piddly amount of money on server costs while taking on even just a fraction of one technical employee’s worth of labor costs makes no sense.
linode was better and had cheaper pricing before being bought by akamai
I don’t feel like anything really changed? Fairly certain the prices haven’t changed. It’s honestly been pleasantly stable. I figured I’d have to move after a few months, but we’re a few years into the acquisition and everything still works.
I concur with every word.
Akamai has some really good infrastructure, and an extremely competent global cdn and interconnects. I was skeptical when linode was acquired, but I value their top-tier peering and decent DDoS mitigation which is rolled into the cost.
Whoa, an acquisition made things worse for everyone but the people who cashed out? Crazy, who could have seen that coming
Guess you came for the hot take without actually using the service or participating in any intelligent conversation. All the sibling comments observe that nothing you are talking about happened.
Snarky ignorant comments like yours ruin Hacker News and the internet as a whole. Please reconsider your mindset for the good of us all.
No longer getting DDOSed multiple years in a row on Christmas Eve is worth whatever premium Akamai wants to charge over old Linode.
You're literally playing into what the author is criticizing.
I started out with linode, a decade ago.
It became much more expensive than AWS, because it bundled the hard drive space with the RAM. Couldn't scale one without scaling the other. It was ridiculous.
AWS has a bunch of startup credits you can use, if you're smart.
But if you want free hosting, nothing beats just CloudFlare. They are literally free and even let you sign up anonymously with any email. They don't even require a credit card, unlike the other ones. You can use cloudflare workers and have a blazing fast site, web services, and they'll even take care of shooing away bots for you. If you prefer to host something on your own computer, well then use their cache and set up a cloudflare tunnel. I've done this for Telegram bots for example.
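The tunnel part really is only a handful of commands. Roughly (the tunnel name and hostname below are placeholders):

```
# One-off quick tunnel on a random trycloudflare.com URL, handy for testing:
cloudflared tunnel --url http://localhost:8080

# Named tunnel on your own domain:
cloudflared tunnel login
cloudflared tunnel create homebox
cloudflared tunnel route dns homebox app.example.com
cloudflared tunnel run homebox   # ingress rules come from ~/.cloudflared/config.yml
```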
Anything else - just use APIs. Need inference? Get a bunch of Google credits, and load your stuff into Vertex or whatever. Want to take payments anonymously from around the world? Deploy a dapp. Pay nothing. Literally nothing!
LEVEL 2:
And if you want to get extra fancy, have people open their browser tabs and run your javascript software in there, earning your cryptocurrency. Now you've got access to tons of people willing to store chunks of files for you, run GPU inference, whatever.
Oh do you want to do distributed inference? Wasmcloud: https://wasmcloud.com/blog/2025-01-15-running-distributed-ml... ... but I'd recommend just paying Google for AI workloads
Want livestreaming that's peer to peer? We've got that too: https://github.com/Qbix/Media/blob/main/web/js/WebRTC.js
PS: For webrtc livestreaming, you can't get around having to pay for TURN servers, though.
LEVEL 3:
Want to have unstoppable decentralized apps that can even run servers? Then use pears (previously dat / hypercore). If you change your mindset, from server-based to peer to peer apps, then you can run hypercore in the browser, and optionally have people download it and run servers.
https://pears.com/news/building-apocalypse-proof-application...
I'm fully aware this is pedantic, but you can't save 10x. You can pay 1/10. You can save 90%. Your previous costs could have been 10x your current costs. But 10x is more by definition, not less. You can't save it.
In English, x or time(s) after a number marks a "unit" used by various verbs. A 10x increase. Increase by 10x. Go up 10x. Some of these verbs are negative like decrease or save. "Save 10x" is the same as "divide by 10". Four times less, 5 times smaller etc. are long attested.
Agree to disagree.
No, x literally means multiply. It doesn't somehow also mean divide. They should use the percent sign, it's what it is for. 10x my costs means 10 x mycost, it's literally an equation
It's just inversion, like 2 to the power of 2 or 2 to the power of negative 2. These negative words inverse it just the same. You may dislike it, but millions of people have spoken this way for a long time.
> x literally means multiply
And some use the dot operator or even 2(3) or (2)(3). When programming, we tend to use *.
hmm.. if you reduce latency from one second to a hundred milliseconds, could you celebrate that you've made it 10x faster, or would you have the same quibble there too?
Edit: Thinking about this some more: You could say you are saving 9x [of the new cost], and it would be a correct statement. I believe the error is assuming the reference frame is the previous cost vs the new cost, but since it is not specified, it could be either.
> if you reduce latency from one second to a hundred milliseconds, could you celebrate that you've made it 10x faster
Yes you can, because speed has units of inverse time and latency has units of time. So it could be correct to say that cutting latency to 1/10 of its original value is equivalent to making it 10x the original speed - that's how inverses work.
Savings are not, to my knowledge, measured in units of inverse dollars.
+1 words matter
Clarity of expression is a superpower
I don’t feel it’s pedantic at all.
People commonly use this expression in everyday conversation, such as, "you could save 10 times as much if you would just shop at Costco." So I agree with OP, their comment is correct but pedantic.
Colloquially, differences in powers of 10 would be better stated as differences of orders of magnitude.
But that's 4x the savings compared to another saving. I suppose you've upped the pedantry and are technically correct, but that's a pretty narrow use case and not the one used in the article.
This becomes much clearer with a balance sheet in front of you.
What is saving? _Spending less_, that's all. Saving generates no income, it makes you go broke slower.
Independent of the price or the product, you can never save more than factor 1.0 (or 100%).
Wasn't there a guy on TV who wanted to make prices go down 1500%? Same BS, different flavor.
I could care less.
So .... you DO care then? Or do you mean "I could NOT care less"?
Yes, that is why I replied with this to a pedant. :)
Consider it as getting 10x the resources for the same price - that is, the resource-to-price ratio is 10x. Except you don't need 10x the resources so you choose to get 1x the resources for 0.1x the price instead.
Sure. Getting 10x the resources for the same price is another valid way to express the thought. Saving 10x isn't, though.
Apples and oranges, tbh.
The author touches on it briefly, but I'd argue that the cloud is immensely helpful for building (and tearing down) an MVP or proving an early market for a new company using startup credits or free tiers offered by all vendors. Once a business model has been proven, individual components and the underlying infrastructure can be moved out of the cloud as soon as cost becomes a concern.
This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner, and developers must stay disciplined to keep components portable from day one, but you can get a lot of mileage out of free credits without burning dollars on any infrastructure. The biggest challenge becomes finding the time to perform these migrations among other competing priorities, such as new feature development, especially if you're growing fast.
Our startup is mostly built on Google Cloud, but I don't think our sales rep is very happy with how little we spend or that we're unwilling to "commit" to spending. The ability to move off of the cloud, or even just to another cloud, provides a lot of leverage in the negotiating seat.
Cloud vendors can also lead to an easier risk/SLA conversation for downstream customers. Depending on your business, enterprise users like to see SLAs and data privacy laws respected around the globe, and cloud providers make it easy to say "not my problem" if things are structured correctly.
It seems like people are less concerned with vendor lock-in nowadays than they were 15 years ago. One of the reasons to avoid lock-in is to be able to move when the price gouging gets just greedy enough that the move is worth the cost. One of the drawbacks of all these built-in services at AWS is the expense of trying to recreate the architecture elsewhere.
> This means that teams must make an up-front architectural decision to develop apps in a server-agnostic manner
Right. But none of the cloud providers encourage that mode of thinking, since they all have completely different frontends, APIs, different versions of the same services (load balancers, storage), etc. Even if you standardize on k8s, the implementation can be chalk and cheese between two cloud providers. The lock-in is way worse with cloud providers.
I'd be more interested to understand (from folk who were there) what the conditions were that made AWS et al such a runaway hit. What did folks gain, and have those conditions meaningfully changed in some way that makes it less of a slam dunk?
My recollection from working at a tech company in the early 2010s is that renting rack space and building servers was expensive and time consuming, estimating what the right hardware configuration would be for your business was tricky, and scaling different services independently was impossible. Also, having multi-regional redundancy was rare (remember when Squarespace was manually carrying buckets of petrol for generators up many flights of stairs to keep servers online post-Sandy?[1]).
AWS fixed much of that. But maybe things have changed in ways that meaningfully changes the calculus?
[1] https://www.squarespace.com/press-coverage/2012-11-1-after-s...
You're falling into the false dichotomy that always comes up with these topics: as if the choice is between the cloud and renting rack space while applying your own thermal paste on the CPUs. In reality, for most people, renting dedicated servers is the goldilocks solution (not colocation with your own hardware). You get an incredible amount of power for a very reasonable price, but you don't need to drive to a datacenter to swap out a faulty PSU, the on site engineers take care of that for you. I ordered an extra server today from Hetzner. It was available 90 seconds afterwards. Using their installer I had Ubuntu 24.04 LTS up and running, and with some Ansible playbooks to finish configuration, all in all from the moment of ordering to fully operational was about 10 minutes tops. If I no longer need the server I just cancel it, the billing is per hour these days.
Bang for the buck is unmatched, and none of the endless layers of cloud abstraction getting in the way. A fixed price, predictable, unlimited bandwidth, blazing fast performance. Just you and the server, as it's meant to be. I find it a blissful way to work.
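And the "some Ansible playbooks" step is a single command once the new host is in the inventory. A sketch with made-up file and host names:

```
# Run the site playbook against only the freshly ordered box (inventory and playbook names are illustrative).
ansible-playbook -i inventory/hosts.ini site.yml --limit new-hetzner-box
```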
True, but I think you're touching on something important regarding value. Value is different depending on the consumer: for you, you're willing and able to manage more of the infrastructure than someone who has a more narrow skillset. Being able to move the responsibility for areas of the service on to the provider is what we're paying for, and for some, paying more money to offload more of the responsibility actually results in more value for the organization/consumer
> I ordered an extra server today from Hetzner. It was available 90 seconds afterwards.
Back when AWS was starting, this would have taken 1-3 days.
AWS also made huge inroads in big companies because engineering teams could run their own account off of their budget and didn't have to go through IT to requisition servers, which was often red tape hell. In my experience it was just as much about internal politics as the technical benefits.
> which was often red tape hell
Seconded. I was working for a storage vendor when AWS was first ascendant. After we delivered hardware, it was typically 6-12 weeks to even get it powered up, and often a few weeks longer to complete deployment. This is with professional services, e.g. us handling the setup once we had wires to plug in. Similar lead time for ordering, racking, and provisioning standard servers.
The paperwork was massive, too. Order forms, expense justifications, conversations with Legal, invoices, etc. etc.
And when I say 6-12 weeks, I mean that was a standard time - there were outliers measured in months.
Absolutely. At several startups, getting a simple €20–50/month Hetzner server meant rounds with leadership and a little dance with another department to hand over a credit card. With AWS, leadership suddenly accepted that Ops/Dev could provision what we thought was right. It isn’t logically compelling, but that’s why the cloud gained traction so quickly: it removed friction.
> At several startups, getting a simple €20–50/month Hetzner server meant rounds with leadership and a little dance with another department to hand over a credit card.
That's not a startup if you can't go straight to the founder and get a definite yes/no answer in a few minutes.
Computing power (compute, memory, storage) has increased 100x or more since 2005, but AWS prices are not proportionally cheaper. So where you were getting a reasonable value in ~2012, that value is no longer reasonable, and by an increasing margin.
This is the big one.
In 2006 when the first EC2 instances showed up they were on par with an ok laptop and would take 24 months to pay enough in rent to cover the cost of hardware.
Today the smallest instance is a joke and the medium instances are the size of a 5 year old phone. It takes between 3 to 6 months to pay enough in rent to cover the cost of the hardware.
What was a great deal in 2006 is a terrible one today.
One factor was huge amounts of free credits for the first year or more for any startup that appeared above-board and bothered to ask properly.
Second, egress data being very expensive with ingress being free has contributed to making them sticky gravity holes.
The free credits... what a WILD time! Just show up to a hackathon booth, ask nicely, and you'd get months/years worth of "startup level" credits. Nothing super powerful - basically the equivalent of a few quad core boxes in a broom closet. But still for "free".
> But maybe things have changed in ways that meaningfully changes the calculus?
I'd argue that Docker has done that in a LOT of ways. The huge draw to AWS, from what I recall with my own experiences, was that it was cheaper than on-prem VMware licenses and hardware. So instead of virtualizing on proprietary hypervisors, firms outsourced their various technical and legal responsibilities to AWS. Now that Docker is more mature, largely open source, way less resource intensive, and can run on almost any compute hardware made in the last 15 years (or longer), the cost/benefit analysis starts to favor moving off AWS.
Also AWS used to give out free credits like free candy. I bet most of this is vendor lock in and a lot of institutional brain drain.
Et al is for people, Et cetera is for things.
Edit: although actually many people on here are American so I guess for you aws is legally a person...
As an American who studied Latin:
Et al. = et alii, "and other things", "among other things".
Etc. = et cetera, "and so on".
Either may or may not apply to people depending on context.
> although actually many people on here are American so I guess for you aws is legally a person...
Corporate legal personhood is actually older than Christianity, and it being applied to businesses (which were late to the game of being allowed to be corporations) is still significantly older than the US (starting with the British East India Company), not a unique quirk of American law.
Oh, I didn't know that, thanks for the lesson.
Tbf it just sounds...so American, so I assumed, my bad. But East India Company was involved...whew I guess that does make sense, oof.
What is unique in the US is the interaction between corporate personhood and our First Amendment and the way that our courts have applied that to limit political campaign finance laws, and a lot of “corporate personhood” controversy is really about that, not actually about corporate personhood as a broad concept.
People also get confused about the Citizens United ruling. It had nothing to do with corporate personhood.
The ruling said that since a person has first amendment rights, those same rights extend to a group of people—any group—whether it’s a non profit organization, a corporation, or something else.
I don't think that's a hard and fast rule? I think et al is for named, specific entities of any kind. You might say "palm trees, evergreen trees, etc" but "General Sherman, Grand Oak, et al"
It was the accountants. CapEx vs. OpEx.
Basically it boils down to:
Is your time worth more than what the fully managed services on AWS cost? And I mean that quite literally in the sense of your billable hours.
If you like spending time tinkering with manually configuring linux servers, and don't have anything else that generates better value for you to do - by all means go for it.
For small pet projects with no customers, a cheap Hetzner box is probably fine. For serious projects with customers that expect you to be able to get back on your feet quickly when shit hits the fan, or teams of developers that lose hours of productivity when a sandbox environment goes down, maybe not so much.
Is AWS more expensive than the salary of the infra guy you would need to hire to do all this stuff by hand? Probably not.
It's not actually that hard to get your own server racked up in a data centre, I have done it. Since it was only one box that I built and installed at home, I just shipped it; they racked it in the shared area, plugged in the power and network, and gave me the IP address. It was cheaper than renting from someone like Hetzner, about £15 a month at the time for 1A and 5TB a month of traffic at 1gbps, plus a one-off install fee of £75.
At the time I did this, no one had good gaming CPUs in the cloud (they are still a bit rare, especially in VPS offerings) and I was hosting a gaming community and server. So I built a 1U with dual machines in it and ran a public and a private server, with RAID 1 drives on both and redundant power. Ran that as a gaming server for many years until it was obsolete. It wasn't difficult, and I think the machine was about £1200 in all, which for 2 computers running game servers wasn't too terrible.
I didn't do this because it was necessarily cheaper, I did it because I couldn't find a cloud server to rent with a high clockspeed CPU in it. I tested numerous cloud providers, sent emails asking for specs and after months of chasing it down I didn't feel like I had much choice. Turned out to be quite easy and over the years it saved a fortune.
What is the mechanism for such services if you want to replace a component (ex. a failing hard drive or upgrade ram)?
"Remote hands" is the DC term for exactly what it sounds like. You write a list of instructions and someone hired by the DC will go over to your rack and do the thing.
The problem isn't setup, it's maintaining it. That's not always an easy job. I'm not trying to dissuade people from running their own servers, but it's something to consider.
Cheap shot maybe, but the fact that the page takes 10 seconds to load when it hits the HN front page is a great, inadvertent illustration of why you might want to use the cloud sometimes.
The failure mode of self-hosting is that your site gets hugged to death, the failure mode of the cloud is that you lose a ton of money. For a blog that doesn't earn you anything, the choice is clear.
Besides, you can just put it behind cloudflare for free.
> The failure mode of self-hosting is that your site gets hugged to death
Learn 2 load balance
If only it were that simple. Don't forget:
Learn 2 HA
Learn 2 MFA
Learn 2 backup
Learn 2 recover within RTO
Learn 2 ETL
Learn 2 queue
Learn 2 scale horizontally
Learn 2 audit log
Learn 2 SIEM
Learn 2 continuously gather SOC evidence
...
OP here. As others have said, it loads immediately for me (tested on desktop + on mobile data + incognito)
The entire site is cached + Cloudflare sits on top of everything. I just ran a couple performance tests under the current HN traffic (~120 concurrent visitors) and everything looks good, all loads under 1 second. The server is quite happy at an average load of 0.06 right now, not even close to start breaking a sweat.
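If anyone wants to sanity-check that kind of claim themselves, a plain curl timing loop is enough (the URL is a placeholder):

```
# Print total response time for 10 sequential requests.
for i in $(seq 1 10); do
  curl -o /dev/null -s -w '%{time_total}s\n' https://example.com/
done
```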
Turns out you can get off the cloud and hit the frontpage of HN and your site will be alright.
100% true, I've hit the front page of hn on a server with an old i5 (aka consumer hardware, and not even high end) with no cloudflare or similar caching, and had no problems. Computers are fast, and serving static html over https is a solved problem.
That happens a lot to blogs deployed on the cloud too. They just need to put a small cache in front and they'll be able to serve one or two orders of magnitude more requests per second.
Took under a second. What part of the world are you located in?
Only took 1 second for me.
This isn’t a binary issue. I disagree with these “abandon the cloud” takes but do agree that most folks spend way way more than they should.
The biggest threat to cloud vendors is that everyone wakes up tomorrow and cost optimizes the crap out of their infrastructure. I don’t think it’s hyperbolic to say that global cloud spending could drop by 50% in 3 months if everyone just did a good audit and cleaned up their deployments.
> The biggest threat to cloud vendors is that everyone wakes up tomorrow and cost optimizes the crap out of their infrastructure
Well there's no danger of that. Even with AWS telling you exactly how to save money (they have a thousand different dashboards showing you where you can save, and even support will tell you), it'll still take you months to work through all the cost optimization changes. Since it's annoying and complicated to do, most people won't do it.
Their billing really is ridiculous. We have a TAM and use a reseller, and it's virtually impossible for us to see what we actually spend each month, what with the reseller discounts, enterprise PPA, savings plans, RIs, and multiple accounts. Their support reps even built us some kinda custom BI tool just to look at costs, and it's still not right.
Exactly this.
We have to use cloud because we're at the low end of 10^5 servers. Once you hit the high end of 10^3 this is really where you need to be.
Everything we're doing is highly inefficient because of decades of fast and loose development by immature software engineers...and having components in the stack that did the same.
If I had 5 years to rewrite and redesign the product to reflect today's computing reality, I could eliminate 90%+ of the cost. But I'll never get that kind of time. Not with 1000 more engineers and 1000 more years and the most willing customers.
You might get lucky enough that you and a bunch of your customers are so fed up with your company that you get to create the competition.
"To them, it’s way too convenient to be on AWS: not only it solves their problem, but it’s also a shiny object. It’s technically complex, it makes them look smart in front of other devs, "
Why? Why be so obnoxious to other people who you claim are being obnoxious to you? No need to read your blog post now.
I think the main thing holding people back from leaving the cloud is simple inertia. There was a time when the cloud was obviously the right choice. Static IPv4 addresses were becoming scarce, IPv6 had not been deployed widely enough, and cloud providers made it easy to stand up a server and some storage with high speed links on the cheap. But over time, things have changed. Rate limits, data caps, and egress fees are now normal (and costly). IPv6 is now deployed widely enough that you might be willing to just run an IPv6-only stack, which greatly simplifies running a server on-premise. And of course, we've all seen time and again how providers will carelessly lock out your cloud account for arbitrary reasons with little to no recourse. The time has come to own your infrastructure again. But that won't happen until people realize it's easy to do.
The first couple of paragraphs of price comparisons are useful. Then there are many paragraphs of sheer waffle. The author doesn't even seem able to define what "the cloud" is:
> The whole debate of “is this still the cloud or not” is nonsense to me. You’re just getting lost in naming conventions. VPS, bare metal, on-prem, colo, who cares what you call it. You need to put your servers somewhere. Sure, have a computer running in your mom’s basement if that makes you feel like you’re exiting the cloud more, I’ll have mine in a datacenter and both will be happy.
I read the whole thing and I didn't see any waffle. Sure, undeniably some excess word count, some emotion in responding to critics. But no waffle.
The "is this cloud or not" debate in the piece makes perfect sense. Who cares whether Hetzner is defined as "the cloud" or not? The point is, he left AWS without going to Azure or some other obvious cloud vendor. He took a step towards more hands on management. And he saved a ton of money.
The cheap hosting service they switched to is arguably "cloud".
If you can't drive to the location where your stuff is running, and then enter the building blindfolded, yet put your hands on the correct machine, then it's cloud.
Fittingly, his website was hugged to death
Loaded instantly for me (never visited, so not cached), in the central US.
It's almost like Clouds are really good at scaling and some rented server isn't! Perfect, almost poetic.
It’s entirely possible for a rented server to host a site that gets millions of views. It’s also entirely possible to make an AWS setup that chokes with 100.
I use Cloudflare in front of my personal stuff. Then it's just a quick DNS switch to go direct if I need to.
It's almost like nobody cares about scaling their blog.
For me, it is a lot simpler to host at Linode (or simpler) than figure out the AWS/GCP crazy complex IAM stuff.
However, there are cases where being able to spin down the server, and not pay for downtime is useful - like 36-core Yocto build machines.
I've always found AWS IAM quite simple, but then again it is my job, so I might be biased. I haven't really dug into GCP well enough to understand it, but I did find it quite daunting to start the few times I messed with it. What's complex about it to you?
For personal projects, honestly, the built in roles AWS provides are okay enough for some semblance of least privilege x functionality IMO.
Plus, most of AWS's documentation tells you the specific policy JSON to use if you need to do XYZ thing, just fill in the blanks.
I have a VPS. It costs me £1.34 per month. It's way over-powered for what I need it for.
However, one situation where I think the cloud might be useful is for archive storage. I did a comparison between AWS Glacier Deep Archive and local many-hard-drive boxes, for storing PB-scale backups, and AWS just squeaked in as slightly cheaper, but only because you only pay for the amount you use, whereas if you buy a box then you have to pay for the unused space. And it's off-site, which is a resilience advantage. And the defrosting/downloading charge was acceptable at effectively 2.5 months' worth of storage. However, at smaller scales you would probably win with a small NAS, and at larger scales you'd be able to set up a tape library and fairly comprehensively beat AWS for price.
Yeah, but in 800 months you'd come out ahead with a dedicated server in your closet.
I run a tiny local dedicated server 24/7 that consumes around 10W on average, which is about $2/mo in electricity costs where I live.
I meant the upfront cost of the machine.
It's a weird service, because below that point AWS is crazy expensive for storage; especially down in the TB range it's awful value compared to your box and drives. But once you get into PB scale, AWS actually seems to be competitive, I guess because the GB/TBs they are selling come from PB-scale solutions and all the overhead that entails.
The cloud is a good idea. It becomes a bad idea when it is the only thing you know or, most likely, is the only cloud you know.
I've been at too many startups with a devops team that would rather provision 15 machines with 4GB RAM than one with 64GB.
I once got into an argument with a lead architect about it, and it's really easy to twist the conversation into "don't you think we'll reach that scale?" to justify complexity.
The bottom line is for better or worse, the cloud and micro services are keeping a lot of jobs relevant and there's no benefit in convincing people otherwise
Multiple small boxes is actually better than one giant box, for a whole lot of reasons. Scaling isn't the issue.
What I always say when given a false choice: why not both? (¿Por qué no los dos?)
vCPU, IOPS, transfer fees, storage: they are all resources going into a pool.
If Hetzner is giving you 10TB for $100, then host your static files/images there and save $800.
Apps are very modular. You have services, asyncs, LBs, static files. Just put the compute where it is most cost effective.
You don't have to close your AWS account to stick it to the man. Like any utility, just move your resources to where they are most affordable.
I would really be interested in an actual comparison, where e.g. someone compares the full TCO of a MySQL server with backup, a hot standby in another data center, and admin costs.
On AWS an Aurora RDS is not cheap. But I don't have to spend time or money on an admin.
Is the cost justified? Because that's what cloud is. Not even talking about the level of compliance I get from having every layer encrypted when my hosted box is just a screwdriver away from data getting out the old school way.
When I'm small enough or big enough, self managed makes sense and probably is cheaper. But when getting the right people with enough redundancy and knowledge is getting the expensive part...
But actually, I've never seen this in any of these arguments so far. Probably because the actual time required to manage a db server is really unpredictable.
> Probably because actual time required to manage a db server is really unpredictable.
This, and also startups are quite heterogeneous. If you have an engineer on your team with experience in hosting their own servers (or at least a homelab-person), setting up that service with sufficient resiliency for your average startup will be done within one relaxed afternoon. If your team consists of designers and engineers who hardly ever used a command line, setting up a shaky version of the same thing will cost you days - and so will any issue that comes up.
It's a skillset that is out of favour at the moment as well, but having someone who has done serverops and devops and can develop as well is generally a bit of a money saver, because they open up possibilities that don't exist otherwise. I think it's a skillset that no one really hired for past about 2010, when cloud was mostly taking off; it got replaced with cloud engineers or pure devops or ops people, but there used to be people with this mixed skillset in most teams.
every box is a screwdriver away
There are many scenarios in which cloud providers (especially AWS) make sense.
Ideally, your company has technical experts who can do quite a lot of things non-cloud, so you can make informed decisions about near-term costs, complexity, vendor lock-in, execution speed, etc.
I'm especially a fan of cloud providers for early startups, which tend to be high on velocity, and low on workers. And the free credits programs often solve the early problem of being low on dollars.
If you’re going to write a post about why self-hosting is better than cloud*, then it’s probably a good idea to make sure your site loads in under a minute.
* at least I assume what this post is; I’m still waiting for it to load.
Loaded instantly for me. :)
sounds like the site isn't able to guarantee reliable service but it works for some people in an unpredictable way
As a note, Hetzner has a lot of auction servers, and I believe those lack the setup fee.
They have also threatened to cancel my account more than once because I typed "ipfs daemon".
https://github.com/ipfs/kubo/issues/10327
https://discuss.ipfs.tech/t/moved-ipfs-node-result-netscan-d...
>This happens with Hetzner all the time because they have no VLANs and all customers are on a single LAN and IPFS tries to discover other nodes in the same LAN by default.
Hetzner is also sinkholed by lots of EDR products because they host a ton of malicious garbage. They are a bad actor.
Why is it their job to be the arbiters of what customers are allowed to do on their platform?
To be fair, most hosting platforms have those in T&S, some even explicitly say you can't torrent pirated movies and even monitor your activities.
Same as AWS. I've added quite a few AWS ip ranges to my firewall.
Not trying to be dismissive of the article but, the way it's written, it reads like a lot of whining.
He could have summed up with "AWS is expensive, host your own server instead".
If all you need is compute, then yeah, self hosting is easy. Otherwise, do you think just about every company under the sun is a sucker for being on the cloud? If it were so easy, companies would either be constantly dropping prices to compete with all the self-hosters, or new companies would pop up to fill in the price gaps.
A 2024 International LT Series semi-truck costs $130,000. That's very expensive compared to a $30,000 Ford Maverick.
Both of these trucks can technically be used to pick up groceries and commute. But, uh, if you bought the semi-truck to get groceries and commute? Nobody scammed you; you bought the wrong truck. You don't have to buy the biggest, most expensive truck to do small jobs. But also, just because there's a cheaper truck available, doesn't mean the semi-truck is overpriced or a scam. The semi is more expensive for a reason.
I wonder about people who write articles like these. I imagine at one point he believed he had to use the cloud, so he started using it without understanding what he was doing. As a result, he was charged a ton of money. He then found out there were cheaper, simpler ways to do what he wanted. And so, feeling hurt, embarrassed, and deceived, he writes this long screed, accusing people of scamming him - I mean scamming you - because you (not him!) could not possibly need to use the cloud, even though you (not him!) assumed you had to use the cloud.
Yes, dude. The cloud is expensive. Sorry you found out the hard way. And for what it's worth, you don't need a datacenter either; stick a 1U in your closet and call it a day.
a simple valid point wrapped in an enormous amount of garbage arguments from both sides. watching idiots argue is exhausting
Help convince me I should be confident taking responsibility for:
* off-site db backups
* a guaranteed db restore process
* auditable access to servers
* log persistence and integrity
* timely security patching
* intrusion detection
so that my employer can save money.
How many URLs of Google et al failing to provide per instance security (leaking your files etc) to other users would you like to see?
At least one?
https://www.cnbc.com/2020/02/04/google-accidentally-sent-som...
The article is hugged to death. Maybe it wasn't hosted in the cloud?
Right, because it’s not possible for cloud services to get hugged to death.
Sometimes I think I am out of the loop for using dedicated servers from OVH, DigitalOcean, and Hetzner, while others spend thousands of dollars for the things I spend barely a few hundred on. This always made me think I am not a good enough developer to know the cutting-edge things others know.
Turns out most developers suck at handling barebones with a Linux distro + nginx and some other plugins to do the same things as the fancy-named AWS stuff. If you are in the same boat, just know that most of these developers suck at what they are doing and don't care about the company budget.
You can get 99.99% of the things done with barebone + Cloudflare, including multiserver redundancy, at a fraction of AWS and Azure costs. Most of these technologies are just fancy words for basic Linux services.
I always thought it was because they were working at a huge scale... but who knows.
Vercel is my favorite... They charge you to pay for AWS.
I think a lot of teams using cloud are using SaaS rather than IaaS. They want a Redis and a Postgres and an S3 and a ... You can set all that up on a server, but it's not very fun if you've never done it before.
It's an informative post but I really dislike the language and style that are becoming common in this kind of post, e.g.:
> Look, first of all, you’re as unique as the other 1000 peanut gallery enjoyers that have made the same astute observation before you. Congratulations. But you’re absolutely missing the point.
Why does it feel like the author is very young and just made the breakthrough discovery that he can have servers without the cloud?! That has always been a thing; clouds were/are used in areas where they work better, say integration with already existing infrastructure, or quick scaling. Just like with everything, there are always upsides and downsides, and it's just about what suits your needs. The author should next try an on-prem approach where he even controls the hardware: even cheaper, but with extra maintenance. For example, I found a used server a while ago (44-core HP Z840 workstation, dual Xeon E5-2699 v4, 512GB RAM) for around $1000, and that's a one-time payment.
The idea that it's cheaper not to use AWS is clear.
I was hoping to see more about porting AWS proprietary features into generic servers.
A big part of the problem isn't just monthly rent, it's vendor lock-in. When your whole system is implemented using AWS specific features, you're not going to run anywhere else.
AWS, and any other third party vendor, can and does obsolete features. Then you're having to port your system just to keep it running on the third party service.
Once you're implemented in a generic server, in a VPS, or your mom's basement, you're free to move to any other hosting provider, data center, whatever.
The loss of understanding that 3rd party dependencies are not good for your company or project, seems a bigger loss to the technical community than FTP hacking...
Jeez, this was a painful read. I actually stopped after a few paragraphs and asked AI to make it more technically focused and remove the ranting so I could stomach it.
Strawman arguments, ad hominem attacks and Spongebob mocking memes, and the casual venturing into conspiracy theories and malicious intentions...
> Why do all these people care if I save more money or not? ... If they’re wrong, and if I and more people like me manage to convince enough people that they’re wrong, they may be out of a job soon.
I have a feeling AWS is doing fine without him. Cloud is one of the fastest growing areas in tech because their product solves a need for certain people. There is no larger conspiracy to keep cloud in business by silencing dissent on Twitter.
> You will hear a bunch of crap from people that have literally never tried the alternative. People with no real hands-on experience managing servers for their own projects for any sustained period of time.
This is more of a rant than a thoughtful technical article. I don't know what I was expecting, because I clicked on the title knowing it was clickbait, so shame on me, I guess...
Is this what I'm missing by not having Twitter?
> Most people complaining about what I did happen to have “devops”, “cloud engineer”, “serverless guy”, “AWS certified”, or something similar in their bio.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
― Upton Sinclair
"I FINALLY got everything off the cloud"
...
...
"P.S. follow me on Twitter"
So uh, not everything
It’s probably the whole point, so he can post later how he paid the $120 from twitter income and now running it for free :)
Sorry but my $3 AWS instance is still cheaper than all of those options.
If you need a lot of, well, anything, be it compute, memory, storage, bandwidth etc., of course cloud stuff is going to be more expensive... but if you don't need that, then IMO $3/mo on-demand pricing really can't be beat when I don't have to maintain any equipment myself. Oracle also offers perpetually free VM instances if you don't mind the glow.
With a quick LLM-assisted search, looks like the cheapest EC2 instance is t4g.micro, which comes in at $2.04/mo. It has 2 vCPUs and only 512MiB of RAM. (I assume that doesn't include disk; EBS will be extra.)
I can certainly see a use for that small amount of compute & RAM, but it's not clear that your level of needs is common. I've been paying for a $16/mo VPS (not on AWS) for about 15 years. It started out at $9/mo, but I've upgraded it since then as my needs have grown. It's not super beefy with 2 vCPUs, 5GiB of RAM, and 60GiB of disk space (with free data ingress/egress), but it does the job, even if I could probably find it cheaper elsewhere.
But not at Amazon. Closest match is probably a t3.medium, with 2 vCPUs and 4GiB RAM. Add a 60GiB gp2 EBS volume, and it costs around $35/mo, and that's not including data transfer.
The point that you're missing is we're not looking for the cheapest thing ever, we're looking for the cheapest thing that meets requirements. For many (most?) applications, you're going to overpay (sometimes by orders of magnitude) for AWS.
You say "if you need a lot", but "lot" is doing a bit of work there. My needs are super modest, certainly not "a lot", and AWS is by far not the cheapest option.
I run heaps of services on AWS and my bill is ~$2-3 - I'm not running any EC2 instances at all. Some of the offerings these cloud providers offer are extremely affordable if you know how to play your cards right and use the right services.
Just get a raspberry pi and run it from your own home internet. You should already be paying for a VPN service and your regular internet service, so you should be able to trivially work out a self-hosted solution. You'll recover your costs inside of two years and come out the other end better off for it.
Don't give the big cloud companies an inch if you don't absolutely have to. The internet needs and deserves the participation of independent people putting up their own services and systems.
Amazon really doesn't care if your $10,000 bed folds up on you like a sandwich and cooks you when AWS us-east-1 goes down, or stops your smart toilet from flushing, or sets bucket defaults that allow trivial public access to information you assume to be secure, because nobody in their right mind would just leave things wide open.
Each and every instance of someone doing something independently takes money and control away from big corporations that don't deserve it, and it makes your life better. You could run pihole and a slew of other useful utilities on your self-hosted server that benefit anyone connected to your network.
AI can trivially walk you through building your own self-hosted setups (or even set things up for you if you entrust it with an automation MCP.)
Oracle and AWS and Alphabet and the rest shouldn't profit from eating the internet - the whole world becomes a better place every time you deny them your participation in the endless enshittification of everything.
Yet another obsessive take on "cloud is bad and expensive", eh? I think these takes vastly discount the value of some SaaS offerings in terms of time saved for small companies. Running and managing numerous DBs, k8s clusters, ci/cd pipelines and stateless container systems is simply impossible with a team of 1-2 people. Sure, if the setup is simple and only requires a few classic components, this is way cheaper and will work fine for a 99.9% SLA. Otherwise it only makes sense if you have very large cloud bills and can dedicate multiple engineers to the newly created tasks.
Not agreeing/disagreeing with your core point, but this doesn't seem right:
> running and managing numerous DBs, k8s clusters, ci/cd pipelines and stateless container systems is simply impossible with a team of 1-2 people.
That's a medium to large homelab worth of stuff, which means it can be run by a single nerd in their spare time.
Homelab =/= Production systems
The gulf between the two, in terms of the approaches, technologies, and due diligence required, is vast.
I think we've gone a little nuts defining "production system" these days. I've worked for companies with zero-downtime deployments and quite a lot of redundancy for high availability, and for some applications it's definitely worthwhile.
But I think for many (most?) businesses, one nine is just fine. That's perfectly doable by one person, even if you want, say, >=96% uptime, which allows for 350 hours of downtime per year. Even two nines allows for ~88 hours of downtime per year, and one person could manage that without much trouble.
Most businesses aren't global. Downtime outside regular business hours for your timezone (and perhaps one or two zones to the west and east of you) is usually not much of a problem, especially if you're running a small B2B service.
For a small business that runs on 1-3 servers (probably very common!), keeping a hot spare for each server (or perhaps a single server that runs all services in a lower-supported-traffic mode) can be a simple way to keep your uptime high without having to spend too much time or money. And people don't have to completely opt out of the cloud; there are affordable options for e.g. managed RDBMS hosting that can make maintenance and incident response significantly easier and might be a good choice, depending on your needs.
(Source: I'm building a small one-person business that is going to work this way, and I've been doing my research and gaming it out.)
One thing that AWS, Google and Azure do that your own systems don't is release their updates whenever it suits them, often taking your business down in the middle of the day with their own problems. You can't fix it, you can't roll back what you just did to get back up and running, you just have to sit and wait.
That is quite different to a business that turns off its boxes for an hour at 0100 Sunday morning to do updates and release new software. The downtime isn't equivalent, because it really matters when it happens and whether that hurts your use case or not. Your own system might be down for more hours a year than AWS, but it's not down Monday to Friday on an evening when you do most of your sales, because you refuse to touch anything during that period; you do all the work outside it and schedule your updates.
It also feels like AWS (or Azure) isn't really that much more reliable than your own thing. But half the internet is down at the same time so you don't get blamed as much.
Its the "No one gets blamed for going IBM" thing in the modern era. They are making it someone elses fault and absolves the blame. The problem is if your competitor is still up you could be loosing customers on average mid day outage, even if they are down for 3x as long its not when it matters.
> running and managing numerous DBs, k8s clusters, ci/cd pipelines and stateless container systems is simply impossible with a team of 1-2 people
Then don't. If your team and budget are small enough not to hire a sysadmin, then your workload is (almost certainly) small enough to fit on one server, one Postgres database, Jenkins or a bash script, and certainly no k8s.
Idiotic piece - the purpose of 'the cloud' is to scale large demand applications. Rental hardware can't really do that.
The post is about that 99% of companies that will never go large scale. Its point is that they don't need cloud, buying a server or two is all they need.
An argument which begins by reducing an entire industry down to a single "purpose" is not convincing.
The vast majority of businesses are not "large demand applications".
> Idiotic piece
That's unnecessary; please don't do that here. Weird that you created an account just to post an unsubstantive comment.