wondering how to combine A* search with gradient descent, given that A* requires a heuristic estimate of how far a partial solution is from the goal 🤔
an admissible heuristic, that is, one that never overestimates the distance remaining to the goal. the more accurate the heuristic, the better the search performs, but if the heuristic ever overestimates, the search breaks: A* can prune paths that actually lead to better solutions.
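rough sketch of what I mean, on a made-up little 4-connected grid; Manhattan distance can't overestimate when every step moves one cell, so it's admissible:

```python
import heapq

def astar(blocked, start, goal):
    # blocked: set of impassable (row, col) cells; start/goal: (row, col)
    def h(cell):
        # Manhattan distance never overestimates on a 4-connected grid,
        # so it's admissible and A* keeps its shortest-path guarantee
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]          # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g                           # cost of a shortest path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if nxt in blocked or not (0 <= nxt[0] < 4 and 0 <= nxt[1] < 4):
                continue
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None                                # goal unreachable

print(astar({(1, 1), (1, 2)}, (0, 0), (2, 3)))  # -> 5
```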
the cool thing about optimisation by gradient descent is that it tells you which direction to step to approach a goal whose exact location is unknown, while A* search tells you which direction to go to reach a goal whose exact location is known but cannot necessarily be reached by heading straight towards it.
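to make the contrast concrete, a minimal sketch (loss function and step size made up): the loop never sees where the minimum is, only the local slope.

```python
def gradient_descent(grad, x, lr=0.1, steps=100):
    for _ in range(steps):
        x = x - lr * grad(x)        # step opposite the gradient: locally downhill
    return x

# f(x) = (x - 3)^2 has its minimum at x = 3, but the loop never sees that;
# it only ever sees the local slope
grad = lambda x: 2.0 * (x - 3.0)
print(gradient_descent(grad, x=0.0))  # -> ~3.0
```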
then genetic optimisation can be applied when you have an unknown goal and a nondifferentiable cost function, so you don’t know what you’re looking for nor which direction it lies in, which is obviously not a great situation to be in.
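a minimal sketch of that mode too, with a made-up piecewise cost that gradients can't touch (population size and mutation scale are arbitrary): no gradients anywhere, just mutate, evaluate, keep the fittest.

```python
import random

def cost(x):
    # piecewise and flat in patches: no useful gradient anywhere
    return abs(round(x) - 7) + (1 if x < 0 else 0)

def evolve(pop_size=20, generations=50, sigma=1.0):
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 4]                 # selection
        pop = [s + random.gauss(0, sigma)                # mutation
               for s in survivors for _ in range(4)]
    return min(pop, key=cost)

print(evolve())   # -> something in [6.5, 7.5), where cost hits 0
```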
gradient descent has always kind of bugged me for all but the simplest problems, because not everything is differentiable, backpropagation is annoying, and numerical futzing around is unsatisfying, but the most frustrating part is that it works!
but most neural network advances seem to come from running this insanely powerful optimisation process on a network architecture that was dreamed up by a shaman behind a waterfall operating on mushrooms and intuition; I want the computers to be doing this part of the job too, but you can’t differentiate over all possible networks, can you… can you?
You cannot. You may convince yourself of this by simply getting too creative in your choice of activation function.
then again I guess why limit yourself to networks? they’re just one slice of the space of possible functions, after all.
You tryna start a fight rn or something?
hmm now I’m wondering what’s the most annoying possible activation function, one that allows a network to approximate any possible function in principle yet makes the network impossible to train in practice…
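one classic candidate, as far as I know, is the plain step function: networks of threshold units can approximate any reasonable function in principle (you can build indicator functions out of them), but the derivative is zero everywhere it exists, so backprop gets no signal at all. quick made-up illustration of the dead gradient:

```python
import numpy as np

def step(x):
    # Heaviside step: flat almost everywhere, a jump at 0
    return (x > 0).astype(float)

x = np.array([-1.3, 0.4, 2.0])
eps = 1e-6
numerical_grad = (step(x + eps) - step(x - eps)) / (2 * eps)
print(numerical_grad)   # -> [0. 0. 0.]: gradient descent has nothing to work with
```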