I first encountered artificial intelligence (AI) in the 1980s while working for General Dynamics as an Avionics Systems Engineer on the F-16. Over the following three decades, I continued to follow the field until I reached a realization: AI is just an algorithm. The goals of AI may one day be reached, but we have no well-defined metric for recognizing when it has actually arrived.
Consider Denver International Airport. Its automated baggage handling system was touted as state-of-the-art, AI-based technology, yet it delayed the airport's opening by 16 months and cost roughly $560M to fix. In the end, the entire system was replaced, not with a learning or deductive system, but with one built on far more basic routing and planning algorithms that could be deterministically designed and tested.
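To make "deterministic routing" concrete, here is a minimal sketch of shortest-path routing over a conveyor network using Dijkstra's algorithm. The `shortest_route` function, the `conveyors` layout, and the station names and travel times are all invented for illustration; they are not the airport's actual design, only the kind of algorithm whose behavior can be specified and tested in advance.

```python
import heapq

def shortest_route(graph, start, goal):
    """Deterministic shortest-path routing (Dijkstra's algorithm).

    graph: dict mapping node -> list of (neighbor, travel_seconds) edges.
    Returns (total_seconds, [node, ...]) or (inf, []) if unreachable.
    """
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    visited = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Reconstruct the path by walking predecessors back to start.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for neighbor, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(pq, (nd, neighbor))
    return float("inf"), []

# Hypothetical conveyor layout: check-in -> sorters -> gates.
conveyors = {
    "checkin": [("sorter_a", 90), ("sorter_b", 120)],
    "sorter_a": [("gate_12", 60), ("gate_14", 75)],
    "sorter_b": [("gate_14", 30)],
}
print(shortest_route(conveyors, "checkin", "gate_14"))
# (150, ['checkin', 'sorter_b', 'gate_14'])
```

Given the same graph, the route is always the same; that repeatability is exactly what makes such a system testable in a way a learning system is not.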
Consider Houston's traffic light system. Mayors have been elected on promises to apply state-of-the-art computer intelligence: interconnected traffic lights, traffic prediction, automatic traffic redirection. Yet the "AI" actually delivered amounts to identifiable computer algorithms with definite behavior and expectations. Certainly an improvement, but not a thinking machine. The closest thing to autonomy is the remote signal preemption used by commuter rail and emergency vehicles.
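Even that preemption feature reduces to a deterministic state machine. The sketch below is an assumption-laden illustration, not Houston's actual signal logic: the phases, durations, and preempt behavior are all made up, but they show how "definite behavior and expectations" looks in code.

```python
from enum import Enum

class Phase(Enum):
    NS_GREEN = "north-south green"
    NS_YELLOW = "north-south yellow"
    EW_GREEN = "east-west green"
    EW_YELLOW = "east-west yellow"

# Fixed cycle: every transition and duration is known in advance.
NEXT_PHASE = {
    Phase.NS_GREEN: Phase.NS_YELLOW,
    Phase.NS_YELLOW: Phase.EW_GREEN,
    Phase.EW_GREEN: Phase.EW_YELLOW,
    Phase.EW_YELLOW: Phase.NS_GREEN,
}
DURATION_S = {Phase.NS_GREEN: 30, Phase.NS_YELLOW: 4,
              Phase.EW_GREEN: 30, Phase.EW_YELLOW: 4}

def step(phase, elapsed_s, preempt=False):
    """Advance the controller one second; preempt models the remote trigger."""
    if preempt:
        if phase == Phase.NS_GREEN:
            return Phase.NS_GREEN, elapsed_s   # hold the priority green
        if phase == Phase.EW_GREEN:
            return Phase.EW_YELLOW, 0          # cut the conflicting green short
        if elapsed_s >= DURATION_S[phase]:
            return Phase.NS_GREEN, 0           # after any yellow, give priority green
        return phase, elapsed_s + 1            # let the yellow finish safely
    if elapsed_s >= DURATION_S[phase]:
        return NEXT_PHASE[phase], 0            # normal fixed-cycle transition
    return phase, elapsed_s + 1

# Simulate a preempt request arriving during the east-west green.
phase, t = Phase.EW_GREEN, 12
for _ in range(6):
    phase, t = step(phase, t, preempt=True)
print(phase)  # Phase.NS_GREEN once the yellow clearance has elapsed
```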
So algorithms form the basis of computer advancement, and they are applied with human interaction to learn the lessons necessary for improving computer behavior. Toward this objective, distinct fields of study are untangling interrelated elements: clustering, neural networks, case-based reasoning, and predictive analytics, to name a few.
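To make one of these fields concrete, here is a minimal k-means clustering sketch in plain Python. The `kmeans` function, the sample data, and the parameters are invented for illustration (production work would typically use a library such as scikit-learn), but it shows that even a "learning" technique like clustering is, underneath, a well-defined, repeatable algorithm.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: a well-defined, testable clustering algorithm.

    points: list of (x, y) tuples. Returns (centroids, assignments).
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assignments = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        for i, (x, y) in enumerate(points):
            assignments[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2
                            + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assignments) if a == c]
            if members:
                centroids[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centroids, assignments

data = [(1, 1), (1.2, 0.8), (0.9, 1.1), (8, 8), (8.1, 7.9), (7.8, 8.2)]
print(kmeans(data, k=2))
```

With a fixed seed, the output never varies, which is the point: the "intelligence" is an algorithm whose behavior can be specified, tested, and trusted.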
When AI is finally achieved, it will be revolutionary. Until then, deterministic algorithms, data mining, and predictive analytics will remain at the core of both qualitative and quantitative advancement.