TipRoast:
"The years teach much which the days never know."

our building automation systems mgr and i just had a discussion on this. he thinks a.i. will beat the brain someday, mainly because a.i. machines will design a.i. machines, and they will get better. i maintained that i agree they will be more "efficient" and a lot of times more "accurate", but sometimes the very falliabilities of the brain lead to amazing discoveries.

The over-used term "outside-the-box thinking" is one way that those "falliabilities" can be expressed. I was in the library the other day, and as I was walking through the stacks I saw a book by E.L. Doctorow ("The March").
I had recently read an article by Cory Doctorow (no relation), and probably would not have noticed that book were it not for the name association (a form of recency bias). So I borrowed that book and am currently reading it - I'll post a summary in the Five Books thread when I've finished it.
Would an AI/ML algorithm that was trained in a specific problem domain that included material generated by Cory Doctorow (a science fiction author) branch out from that to look at the work of E.L. Doctorow?
I doubt it, unless the specific AI was trying to duplicate TipRoast's thought process (which probably could be done with a few lines of Python).
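For the joke's sake, here is what those "few lines of Python" might look like: a toy sketch of surname-based name association, the kind of leap a model trained only on one author's domain wouldn't make. Everything here (the function name, the lists) is made up for illustration.

```python
# Toy "name-association" browsing: given authors seen before, suggest any
# library author who shares a surname. Purely illustrative, not a real model.

def name_associations(known_authors, library_authors):
    """Return library authors who share a surname with a known author."""
    known_surnames = {name.split()[-1] for name in known_authors}
    return [author for author in library_authors
            if author.split()[-1] in known_surnames
            and author not in known_authors]

seen_before = ["Cory Doctorow"]  # what the "model" was trained on
on_the_shelf = ["E.L. Doctorow", "Ursula K. Le Guin", "Cory Doctorow"]

print(name_associations(seen_before, on_the_shelf))  # ['E.L. Doctorow']
```

A few lines indeed, though the hard part (noticing the book while walking through the stacks) is left as an exercise for the hardware.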