It’s critical to remember exactly what artificial intelligence is. Like all useful tools in modern society, AI is an amplification of human strength, in this case the strength of the mind. Because of that, we commonly mistake AI for an attempt to replicate the mind itself. Much of our ability to think and to reason will take a long time to recreate in a machine, but the more important question is why we would want to in the first place. What economic value is there in replicating the human mind in a machine?
As we move deeper into deductive reasoning, we’re at the dawn of a new world in which machines will produce new information as never before. Rather than asking simple factual questions like, “How tall is Mount Everest?” we can start asking questions such as, “Is there a correlation between company growth and the length of customer contact?” In both cases, the system’s inferences have to be tied back to the elements of the question itself, and we need to know how much confidence to place in the answer.
With the Everest example, when the algorithm looks up the mountain’s height, it will probably check multiple sources and return an answer along with a confidence level and the source of the information, which could be as simple as a single identifying document. If the algorithm finds identical answers across multiple reputable sources, the confidence score will be high; if the information conflicts and few sources cite the same height, the score will be low. This will come into play more frequently as AI is increasingly used in deductive reasoning. By citing its sources, the system lets people both test the veracity of a conclusion and fold that conclusion into their own understanding of the problem.
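One minimal way to sketch that agreement-based scoring, assuming the system has already gathered one answer per source (the source names and heights below are illustrative, not a real lookup API):

```python
from collections import Counter

def confidence(answers):
    """Score agreement among sources: the answer most sources give,
    plus the fraction of sources backing it. `answers` maps a
    source name to its reported value (a hypothetical structure)."""
    if not answers:
        return None, 0.0
    counts = Counter(answers.values())
    top_value, top_count = counts.most_common(1)[0]
    return top_value, top_count / len(answers)

# Identical answers across reputable sources -> high confidence.
agree = {"encyclopedia": 8849, "survey_bureau": 8849, "atlas": 8849}
print(confidence(agree))     # (8849, 1.0)

# Conflicting heights across sources -> low confidence.
conflict = {"encyclopedia": 8849, "old_atlas": 8840, "blog": 8844}
print(confidence(conflict))
```

A production system would also weight sources by reputation and recency, but even this simple majority fraction captures the high-agreement versus conflicting-sources distinction described above.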
h/t @FullArtIntel