Microsoft with its new Bing (or is it Sydney?), Google with Bard, and OpenAI with ChatGPT are making formerly restricted AI chatbot technologies accessible to the general public.
How do these LLM (Large Language Model) applications work? OpenAI’s GPT-3 told us that AI used “a succession of autocomplete-like programmes to learn language” and that these programmes analyze “the statistical features of the language” to “make educated guesses based on the words you’ve previously typed.”
Or, in the words of human James Vincent: “These AI technologies are enormous autocomplete systems that are trained to guess the next word in any given sentence. As a result, they have no hard-coded database of ‘facts’ to draw from, only the capacity to compose statements that sound plausible. This means these systems are prone to presenting false information as fact, since the plausibility of a sentence does not guarantee its veracity.”
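The “autocomplete” intuition can be sketched with a toy example. A real LLM is a large neural network predicting tokens, not a word-count table, but a simple bigram model (illustrative only; the corpus and function names here are invented for this sketch) shows the core idea of guessing the statistically most likely next word:

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on vast amounts of text.
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("the"))  # prints "cat" — it follows "the" most often here
```

Note that the model only knows frequencies, not facts: it will happily continue a sentence plausibly whether or not the result is true, which is exactly the failure mode Vincent describes.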
But many more components of the AI landscape are coming into play, and there will be problems, and you can be sure to see it all unfold on The Verge.