A lot of folks I talk to are trying to start projects, but don’t really know how to get started or what’s involved. A comprehensive guide for creating applications is far too long for a post like this, but I thought I’d step back and explain the meta-process that I go through to get a project running.

Assumption: Software sucks.

Here’s something that engineers and 85-year-old grandmas have in common: they know that software is always broken. Computers, when given the choice, will typically do the dumbest thing possible. Software engineering is therefore an exercise in defining very precise rules for where the electrons can go, and it’s still almost impossible to ship software that doesn’t end up breaking catastrophically in some way.

Don’t believe me? Pick an application. Open it, and see how long you last before something goes wrong. Most software won’t make it obvious that it screwed up - it’ll do things like make you repeat whatever you just did, or force you to click in a slightly different spot. And I’m sure many of you have acquired a Stockholm-syndrome-like acceptance that in real life a fork will always poke things and a hanger will always hang things, but sometimes if you click on the Chrome button it’ll just bounce and then do absolutely nothing while gazing at you with its one weird eyeball.

Developer environment

Okay, so let’s say you’re ready to create something. For now I’ll assume that it’s a web application, but these steps are the same, if not worse, if you’re developing for Windows, iPhones, etc.

The first problem is that the only way to create software is by using software, which means anytime you hit problems, you’re never sure if your particular issue is related to the software you’re building or the software *you’re using*. This issue comes up most commonly when you first start configuring your machine so you can write code effectively.

While it’s theoretically possible to write code by opening up your version of Notepad and typing a few things, no one does it that way anymore. The typical steps are something like:

  1. Set up a “sandbox” in which you can install all the dependencies that your code will rely on.
  2. Install a bunch of dependencies into that sandbox.
  3. Configure some type of program that helps you write code.
  4. Start writing code in such a way that it relies on the sandboxed stuff in the code-writing program of choice.
  5. Try running the code.

Note that steps 1-3 are not directly related to the software you want to build, and in a non-trivial number of my own projects (probably 90% of them when I was just starting out, but as much as 10% now) I end up hitting bugs or flaws in the process such that I don’t actually get to step 4 even after prolonged work.

For example, Python ships with at least 6 different ways of creating “virtual environments” for sandboxing. That’s ignoring a couple of other ways to do it, including what happens to be the most popular way today. Each approach you take has quirks / bugs / issues which are often, but not always, addressed by various StackOverflow posts, documentation, or personal debugging.
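To make the sandboxing step concrete, here’s a minimal sketch using Python’s built-in `venv` module - just one of the several approaches mentioned above, picked because it ships with the interpreter itself:

```python
# Create a sandbox ("virtual environment") using only the standard library.
import sys
import venv
from pathlib import Path

env_dir = Path("sandbox")
venv.create(env_dir, with_pip=True)  # step 1: the sandbox, with its own pip

# The sandbox gets its own interpreter and pip, separate from the system ones.
bin_dir = env_dir / ("Scripts" if sys.platform == "win32" else "bin")
print((env_dir / "pyvenv.cfg").exists())  # True - the sandbox now exists

# Step 2 (installing dependencies *into the sandbox*) would then look like:
#   subprocess.run([str(bin_dir / "pip"), "install", "requests"], check=True)
```

The point isn’t this particular incantation - it’s that every alternative (virtualenv, conda, pipenv, poetry, and friends) has its own equivalent, each with its own failure modes.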

Beginners constantly trip on this - they need to fix code written by other people before they’ve even written a line of their own. It doesn’t help that error messages are usually terrible or not indicative of the specific problem. Many of the errors, for instance, actually come from other applications you’ve installed.

Even if beginners do end up finding solutions to these issues, their fixes are often slightly incorrect. They end up in a brittle state and don’t understand why their code falls over constantly when no one else’s seems to.

Effectively, novices to programming are like novices to cars - looking under the hood is a scary thing, and even if you get from point A to point B you don’t really understand what’s happening from first principles. When I talk to experts, this problem doesn’t get better - if anything it gets worse. At least with cars there is some level at which you can trust that the thing you’re using (like a solid steel beam) is probably going to work like most steel beams and be pretty reliable, but there is almost no level of abstraction in software where you can have the same confidence.

You would think, of course, that rather than give everyone a broken-down jalopy to start with, we could just give everyone working cars and let them start coding - but that’s a pipe dream, and it doesn’t exist. Every time a talented software engineer tries to make a better starting car, it just transmogrifies into some weird thing for everyone else, which sometimes has more problems than if the engineer had done nothing at all. Or it’s some kind of toy which can’t be used to build real stuff. You only have to work at a large company a few weeks and see how they set up their dev environment (which, remember, they have a monetary incentive to make easy) to realize how broken this is.

Programmers seem to regard this as some kind of law of nature, in that each one is willing to jump through all kinds of ridiculous hoops just to get the first line of code up and running. “Hello World” is the famed starting place for many programs, but the real thing it’s measuring is whether you’re set up correctly.

This is also why many smart engineers are pathologically afraid of software that solves technically straightforward but societally important problems. It’s just too easy for it to be broken, sometimes by hackers but usually because software is always broken in some way, and all you can do is try to reduce the downside risk of breakage or add something more reliable to guarantee safety.

There are no obvious solutions, but my approach is to use as little of other people’s software as possible. For instance, I set up the minimal version of a sandbox that I know works and is straightforward for me to debug. I use a package manager that I’ve used 100 times before, and I only install libraries which a sufficient number of people have vetted and run themselves. It’s only in this manner that I can cut the number of workflow-breaking experiences to 10%, and even then it’s always a close call.

You would think that this would be a common sense approach, but it actually isn’t. It’s apparently normal for folks to install a whole bunch of crap they don’t really use, because that’s what the tutorial says, and then find that they’re stuck debugging code that’s useless for their project. Just to pick a specific example, I load JavaScript files from a CDN, because it’s both faster and easier to debug, but so many people have been trapped in npm dependency management that it’s become a joke.

Writing your program

When it comes to writing your code you face a new problem: there are precious few practical tutorials about how to build something real. Instead, what exists on the internet today are predominantly lessons on how to build 1) toy programs, usually built in a weird way to make them seem simple or trivial, but that rapidly break down when complexity is added, or 2) extraordinarily complex programs, usually built in a weird way to support dozens of programmers at large organizations. Both of these are basically useless, since what you’re building is something in between. It’s not *just* a todo list app - it might include user management, relatively complex calculations, etc. - but it definitely isn’t some gargantuan project that needs “big” infrastructure.

By the way, it’s possible that better tutorials exist, but it’s impossible to find them. I generally discover links using an aggregator (Reddit, HN, Lobsters, etc.) or Google. Effectively, both surface the most popular articles, measured by either votes or links, but since the largest population of engineers is either 1) trying to play with a new thing or 2) working at a big company, the articles you get mostly cater to those two groups. Anything else you get is some flavor of SEO spam (or, as some call it, content marketing) - shallow writing that doesn’t actually talk about anything, with the hope that search engines will rank it higher. Since the keywords you use will often be similar to the more popular stuff, there isn’t really a way for Google to know that it’s doing a bad job.

I suspect that Goodhart’s law has weakened PageRank (or whatever special sauce Google uses) and probably is an opening for a new approach to search that doesn’t incentivize writing and linking lots of random garbage.

So you’ve built your project - now what? Now you have to deploy it.

Putting bits on the internet

There exist a variety of tools to deploy your software to the world, but again we see a hole in the market (or at least we saw one historically) for the startup founder / small company builder, who doesn’t want to spend a ton of time on dev ops using AWS but needs more than a rudimentary shared hosting setup. Sudopoint is deployed on Heroku, for example, but at one point was deployed on Webfaction, PythonAnywhere, and AWS, mostly to general frustration.

The main problem is that the core skill set for building an application is usually different from deploying it. In Sudopoint’s case, static assets (like CSS or JS) need to be served differently than the rest of the application, the production database is slightly different, environment variables are changed, etc. If I’m at all representative of the general programming population, there are an order of magnitude more projects that work perfectly well sitting on my laptop, but I didn’t bother to deploy to the world because of the pain / annoyance involved. As usual, these projects have now gone stale and it’s unclear if they could ever be deployed, just a sunk cost of learning to build things.
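A common way to handle that dev/production divergence is a small settings shim keyed off environment variables. This is a hedged sketch - the variable names (`APP_ENV`, `DATABASE_URL`, `STATIC_URL`) are illustrative, not Sudopoint’s actual configuration:

```python
# Illustrative dev/production settings split. APP_ENV, DATABASE_URL, and
# STATIC_URL are made-up names for this sketch, not a real app's config.
import os

def get_settings():
    env = os.environ.get("APP_ENV", "development")
    if env == "production":
        return {
            "debug": False,
            # in production the database and static assets live elsewhere
            "database_url": os.environ["DATABASE_URL"],
            "static_url": os.environ.get("STATIC_URL", "https://cdn.example.com/static/"),
        }
    return {
        "debug": True,
        "database_url": "sqlite:///dev.db",  # local file, no server needed
        "static_url": "/static/",            # served by the dev server itself
    }

settings = get_settings()
print(settings["database_url"])
```

Even this tiny bit of indirection is a deployment-only skill: nothing about building the app on your laptop ever forces you to learn it.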

The only suggestion I have here is to start with a skeleton project that is known for being deployed easily to one of these services, and then rewrite / adjust your application to fit the template. An apt metaphor is removing an appendage because your coat doesn’t have enough sleeves, but it’s often easier to change what you understand than to try to change what you don’t.

Optimization

When you first deploy your app, it’s going to be slow. Not slow like the way Gmail is slow, where it’s obviously doing a whole bunch of stuff and your browser just can’t keep up for a couple seconds. By slow I mean, you have a little “Hello World” page with almost nothing in it, and it takes 15 seconds to load on your fiber connection.

Most computer scientists understand Big O notation. It’s how we measure what types of algorithms are likely to be slower than others. Unfortunately in matters of web applications (and indeed, for most applications today) Big O notation is not only useless, but worse, because by paying attention to it you’ll optimize the wrong things, and as we know, premature optimization is the root of all evil.

What do I mean? At Sudopoint, we were seeing slow load times, and the reason had nothing to do with the complexity of the computations we were doing, at least at the code level we were working at. Instead, in order, the problems were:

  1. Database inserts and reads without indexes
  2. Unoptimized image, CSS, and JS assets
  3. Using SQLite instead of Postgres
  4. Shared hosting instead of more dedicated hosting
  5. Lacking a CDN
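Item 1 alone can dominate everything else. Here’s a small, self-contained demonstration (with a made-up `votes` table) of the same query before and after adding an index:

```python
# Demonstration of problem #1 above: the same SQLite query with and
# without an index. The table and data are made up for illustration.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (user_id INTEGER, post_id INTEGER)")
conn.executemany(
    "INSERT INTO votes VALUES (?, ?)",
    ((i % 5000, i % 300) for i in range(200_000)),
)

def time_query():
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM votes WHERE user_id = 42").fetchone()
    return time.perf_counter() - start

before = time_query()                                        # full table scan
conn.execute("CREATE INDEX idx_votes_user ON votes (user_id)")
after = time_query()                                         # index lookup
print(f"scan: {before:.4f}s  indexed: {after:.4f}s")
```

The indexed lookup is typically orders of magnitude faster than the full scan - and no amount of Big O analysis of your own application code would have pointed at it.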

From my research, our application was not unique in this respect. Most of these were ascertained with a profiler or with something like WebPageTest, but none of them would have been caught with “classic” computer science training. Instead it was mostly a matter of putting in lots of print statements, simply watching the machine as it went along, and seeing where it spent a lot of unnecessary time.
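The “lots of print statements” approach can be as simple as a throwaway timing helper - a sketch, not a real profiler:

```python
# A crude version of "print statements and watch the machine": a context
# manager that stamps how long each labeled step takes.
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    start = time.perf_counter()
    yield
    print(f"{label}: {time.perf_counter() - start:.3f}s")

with timed("load assets"):
    time.sleep(0.05)   # stand-in for reading CSS/JS off disk
with timed("db query"):
    time.sleep(0.12)   # stand-in for an unindexed query
```

Wrap each suspicious step, run the app once, and the slowest line usually announces itself immediately.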

That’s the main reason why you should eschew complexity in what you’re building. It’s not that getting something working using complex architecture is impossible - it’ll just take longer. The problem is that if you don’t understand every part of the architecture you’re using, you won’t be able to “fly alongside the wave” to figure out what it’s doing.