Machine Learning: Present to Future

Many readers probably have some notion of what a “General Artificial Intelligence” (GAI) would be simply through exposure to pop culture and pop science. A GAI would be capable of the type of problem solving that has eluded but intrigued computer scientists since before the inception of industrial computing: the very human ability to look at nearly any problem and, through trial and error and experimentation, simply “work things out.”

This, rather obviously, does not exist yet. But billions of dollars have been poured into the development of focused AI systems. Early chess computers took on humans and won, IBM’s Watson beat the preeminent “Jeopardy” champion, and within the past few years Go, arguably the world’s most logically complex board game, also went to a machine.

These types of programs function on basically the same principles as any other piece of software, with the critical difference that they have a limited ability to “learn.” And they are most certainly in use today.
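To make the idea of “learning” concrete, here is a minimal sketch in Python of one of the oldest learning algorithms, a perceptron. It is purely illustrative and vastly simpler than any system named in this article, but it shows the core difference from ordinary software: instead of following hand-written rules, the program adjusts its own parameters by trial and error against labeled examples.

```python
# A perceptron: the program is not told the rule, only shown
# examples, and it nudges its weights until its guesses match.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for two inputs plus a bias term."""
    w = [0.0, 0.0, 0.0]  # [w1, w2, bias]
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # Guess: fire (1) if the weighted sum crosses zero
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = target - pred
            # Trial and error: shift weights toward the right answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Learn the logical AND function from four labeled examples
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(data)
print([predict(weights, a, b) for (a, b), _ in data])  # [0, 0, 0, 1]
```

Nothing in the code states the AND rule; the correct behavior emerges from repeated correction. Scaling that same feedback loop up to millions of parameters and examples is, in essence, what modern machine learning systems do.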

IBM provides a long list of the platforms Watson is currently working on. The list ranges from using Watson IoT to analyze billions of pieces of elevator sensor data, to matching tumor DNA against millions of pieces of available research. Some broad-scope facial recognition software is also written with an AI environment to assist.

The applications of the future are difficult to predict, but if we assume any improvement at all in AI over time, it is fair to assume that it will penetrate many, many industries, just as the IoT already has and continues to do.