Realistically, anything is musical synthesis if you can make it output a wave that oscillates in the audible spectrum.
Build a bot that turns the real-time rate at which people tweet into an oscillating signal and you have a synthesizer. Wire up a solar panel so the amount of sunlight it receives becomes the signal you send to speakers, and you have a synthesizer.
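To make that concrete, here is a minimal sketch of the idea in Python. The function name, the 0–1 control range, and the 110–880 Hz frequency span are all my own illustrative assumptions: any slowly varying signal (tweet rate, sunlight level) normalized to 0..1 gets mapped onto the pitch of a sine oscillator.

```python
import math

SAMPLE_RATE = 44100  # audio samples per second

def synthesize(control_values, seconds_per_value, min_hz=110.0, max_hz=880.0):
    """Map a slow control signal (e.g. tweets per second or sunlight,
    normalized to 0..1) onto the frequency of a sine oscillator.

    Returns a list of audio samples in [-1, 1].
    """
    samples = []
    phase = 0.0
    for v in control_values:
        # clamp the control value and map it into the audible range
        freq = min_hz + (max_hz - min_hz) * max(0.0, min(1.0, v))
        for _ in range(int(seconds_per_value * SAMPLE_RATE)):
            samples.append(math.sin(phase))
            # advance phase by the per-sample angular increment
            phase += 2 * math.pi * freq / SAMPLE_RATE
    return samples

# a control signal that ramps up: the pitch rises over time
audio = synthesize([0.0, 0.5, 1.0], seconds_per_value=0.1)
```

Swap the input list for live tweet counts or a photoresistor reading and the same loop becomes the "anything is a synthesizer" machine described above.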
The “types of synthesis” distinction is mostly about how you interact with the system to sculpt the sound. It defines a workflow, not really a type of sound; it’s just that each workflow naturally encourages certain sounds to come out. This is why it’s so exciting to see synthesizers come up with new workflows and interfaces.
Of course every synth has its own character, but the big difference between two synths is how each one makes you navigate the sound-design phase.