[[Removing LLM safety is easier than adding it]]. It seems to cost between $0.10 and $200 to remove most safety features of frontier models, via [[Abliteration]] or fine-tuning (Qi et al. 2023). The fine-tuned version is just a file, and will likely be uploaded online by someone, so the fine-tuning can be a one-time cost, not a per-user one.

It is almost certain that, if a powerful open-weights model is released, some provider will offer inference-as-a-service on an unfiltered version of it within weeks. Examples of this already exist: [Dolphin 3](https://ollama.com/library/dolphin3) and [Venice Uncensored](https://venice.ai/uncensored) are both derived from Mistral models.

The barrier to entry for an average person is likely just signing up for the free tier of some "uncensored AI" app, which can be surfaced by a Google search or by asking another AI for it.