I've spent a good amount of time thinking about the meaning of life, as any curious human would. Trying to find a good [[Explanation|explanation]].
It seems I might not matter in 'the grand scheme of things'. Consciousness could be a side effect of evolution, and evolution a side effect of atoms and space doing stuff.
It seems the positive energy (matter) and negative energy (gravity) cancel out - so all of this may have just kind of popped into existence?
I can still choose a life that gives me purpose - make my own meaning. But if I objectively don't matter, then neither does my feeling of fulfillment. Same goes for 'my happiness' and 'the impact I have on the world'.
Subjective experience can arguably matter in itself, but that never felt sufficient to me. If I knew subjective experience 'objectively' mattered, I'd probably try to improve a billion people's subjective experience, rather than be happy myself.[^1]
So what matters?
Luckily, I'm [[Fallibilism|not sure]] that I don't matter. Maybe some religion is right, the [[Simulation hypothesis|simulation hypothesis]] holds true, or 'human flourishing' genuinely matters.
And because it's all uncertain, I can make [[Expected value|expected value]] bets![^2]
Imagine there's a 10% chance God exists, and that saving human lives is the ultimate virtue. Each life I save is then, in theory, a cosmically meaningful act, though only 10% as meaningful as if I knew with certainty that God existed.[^3]
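To spell out the arithmetic behind that bet (a toy sketch, where $V$ is an illustrative placeholder for how much a saved life matters in the world where God exists):

$$E[\text{value of saving a life}] = P(\text{God exists}) \cdot V = 0.1 \cdot V$$

So acting on the 10% possibility still carries a tenth of the full cosmic weight, which is far from nothing.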
So across all potential objective goods and the chances they exist, I've found three that, subjectively, make the most sense:
1. Improve this explanation on the meaning of life. Explore and understand the universe. There's a chance that the meaning of life is out there for us to find. Finding it would almost certainly mean we could do more good.
2. Keep the future open. Society has already made lots of moral progress, and we'll likely continue. It's possible everyone dies, robots soullessly take over the universe, or we get locked into a permanent totalitarian state. Finding the answer to everything at that point would be of no help.
3. Maximize conscious flourishing (and minimize suffering). Something that looks less like [hedonium](https://www.goodreads.com/quotes/1413237-consider-an-ai-that-has-hedonism-as-its-final-goal) and more like [eudaimonia](https://www.psychologytoday.com/us/blog/hide-and-seek/202006/what-is-eudaimonia).
And so these are what I optimize my life around.
I enjoy improving the explanation on the meaning of life (e.g. this writing), though it can sometimes be a rabbit hole that only leaves me more confused. It doesn't seem hugely impactful as a full-time pursuit.
I try to maximize conscious flourishing ([[Why I'm vegan|why I'm vegan]]), though I also believe in [longtermism](https://longtermism.com/) - that the long-term potential of humanity matters orders of magnitude more than anything today. So working on this is much the same as working on keeping the future open.
Keeping the future open means working on [existential risk](https://forum.effectivealtruism.org/topics/existential-risk?tab=wiki). And of the existential risks we face, AI and the transition to superintelligence is [the biggest](https://www.tobyord.com/writing/the-precipice-revisited#:~:text=In%20The%20Precipice%2C%20I%20gave,direction%20for%20the%20overall%20risk), and also plays a role in all of the others. Trying to fix that means [working on AI safety](https://www.howdoihelp.ai/).
I'm not sure what I'll do in this space, but I can't imagine working on anything else.
---
I have a lot of uncertainty in this chain of logic, especially as it has gone through many versions by now, so I wouldn't, say, sign a contract committing the rest of my life to AI safety.
I'm also not robotically working to improve AI safety with every small thing I do every day. I'm far from perfect and often have down periods or side quests that last days or weeks. However, I do find myself making all big decisions from this perspective, and course-correcting within a few days whenever I do things that aren't fully aligned.
[^1]: Not to say I'm not happy, but I guess I see that more as an instrumental goal - you can't do anything else if you're not a functioning human, which requires all these factors
[^2]: Though there are other ways to reason under uncertainty (maximin, moral hedging), and my probabilities are largely based on intuition
[^3]: I met a Stanford student on my first day visiting SF, and he asked me "If there existed a worthy God, would you submit to Him?". I definitely would, which I think is a way in which I differ from many. If I knew the meaning of life with certainty, I would do my very best to live as close as I could to it