Maximizing some value could well do more bad than good.

- Though I think maximizing good should always involve [[Fallibilism|comprehensive uncertainty]], [[People tend to be bad at staying in uncertainty]].
  - It seems people who search for what is good tend to end up with strong belief in some maximand that is actually uncertain (religions, other philosophies).
- [[How I maximize good|If good maximands do exist]], many people's current maximands are most likely flawed.
  - Extrapolating flawed maximands can cause a lot of bad.
  - This is the main [[Rogue AI]] worry. We expect superintelligence to be a maximizer ([[The Adolescence of Technology|though Dario says it might not be]]), and we doubt it will know the right values to maximize, as [[The alignment problem]] is unsolved. Hence we expect superintelligence to maximize some flawed value, and many flawed values seem to result in getting rid of humanity.

Even if you can stay in uncertainty, should you? It takes a lot of effort to question everything and to stay disciplined enough to continuously recognize the many uncertainties in all your beliefs. Figuring out roughly what you think is good and then [[Agency|just doing things]] might be a better strategy for doing good. You have to find the right [[Explore exploit]] balance.[^1]

[^1]: Karnofsky, Holden. 2022. _EA Is about Maximization, and Maximization Is Perilous_. September 2. [https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous](https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous). [[karnofskyEAMaximizationMaximization2022|Annotations]]