If you're planning to run 2x 3090s, it's important to choose a proper PSU as well. Pick one rated for at least 1500W. As for the motherboard, there's no reason not to go with a B650 board; just make sure the VRMs are solid and won't overheat. Lastly, don't fall for the PCIe 5.0 marketing gimmicks. Even a 4090 can't fully utilize the bandwidth of a PCIe 5.0 x16 link, so you can save a lot by choosing a sensible board.
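To give a feel for where a 1500W recommendation comes from, here's a rough power-budget sketch. All the figures (CPU draw, transient headroom factor, "rest of system" load) are assumptions for illustration, not specs from your build:

```python
# Rough power-budget sketch for a 2x 3090 build.
# All numbers below are assumed ballpark figures; check your actual parts.
gpu_tdp_w = 350          # RTX 3090 rated board power, per card
gpu_count = 2
cpu_w = 170              # assumed high-end desktop CPU under load
rest_w = 100             # assumed fans, drives, RAM, motherboard
transient_factor = 1.25  # assumed headroom for GPU transient power spikes

steady_w = gpu_tdp_w * gpu_count + cpu_w + rest_w
recommended_psu_w = steady_w * transient_factor

print(f"steady-state draw: ~{steady_w} W")
print(f"with transient headroom: ~{recommended_psu_w:.0f} W")
```

The steady-state total lands around 970W, and once you add headroom for the 3090's well-documented transient spikes, a 1200W+ unit is the floor; 1500W just gives you comfortable margin.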
Also, could you be a bit more specific about your use case?
1. Will you be training models? If so, what will your workflow look like?
2. If you're just going to try out pre-trained LLMs, what will be the average size of the models (e.g., 13B) you'll be running?
3. Just to be clear, you've already mentioned that it's for local LLMs. But do you plan to self-host the models and expose APIs (either to your local network or the internet) so that multiple people can access the models at the same time?
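The model-size question matters because it translates directly into VRAM. Here's a back-of-the-envelope sketch; the 20% overhead factor for KV cache and activations is an assumption, and real usage varies with context length and runtime:

```python
def vram_gb(params_billions, bytes_per_param, overhead=1.2):
    """Rough inference VRAM estimate: weights plus an assumed
    ~20% overhead for KV cache and activations."""
    return params_billions * bytes_per_param * overhead

# A 13B model at different precisions (assumed overhead factor):
print(f"13B @ fp16:  ~{vram_gb(13, 2):.1f} GB")   # ~31 GB, needs both 3090s
print(f"13B @ 4-bit: ~{vram_gb(13, 0.5):.1f} GB") # ~8 GB, fits one card easily
```

That's why the answer changes the build: a 13B model quantized to 4-bit fits on a single 3090, while fp16 inference or anything in the 30B+ range is where the second card starts to pay off.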
It would also help if you could post the complete build information for your current setup.