Intelligence is notoriously difficult to define, but when most people (including computer scientists) think about it, they interpret it on the model of human intelligence: an information-processing capacity that allows an autonomous agent to act on the world.
But Michael I. Jordan, the Pehong Chen Distinguished Professor in the departments of computer science and statistics at the University of California, Berkeley, and a Distinguished Amazon Scholar, believes that this is too narrow a conception of intelligence.
“Swarms of ants are intelligent in the sense that they can build anthills and share food, even if no individual ant is thinking about hills or sharing,” says Jordan. “Economists have taken this perspective in their focus on the tasks performed by markets. Performing those tasks well is itself a reflection of intelligence. A market that brings food into, say, New York every day is an intelligent entity. And it is important to remember that the human brain is itself a loosely linked collection of neurons, each performing relatively simple functions.”
Jordan argues that distributed, social intelligence is better suited to meeting human needs than the type of autonomous general intelligence we associate with the Terminator films or Marvel’s Ultron. In the same way, he says, AI’s goals should be formulated at the level of the collective, not at the level of the individual agent.
“A good engineer has to think about the overall goal of the system being built,” says Jordan. “If your overall goal is diffuse – create intelligence, and somehow it will solve problems – that’s not good enough.
“What machine learning and networked data can do is bring people together in new ways – to share data, to share services with each other, and to create new kinds of markets, new kinds of social collectives. Building such systems is a perfectly reasonable technical goal, and it applies to real-world domains such as transportation, commerce, and health care.”
New signals
At this year’s International Conference on Acoustics, Speech and Signal Processing (ICASSP), Jordan will elaborate on these ideas in a plenary talk entitled “An alternative view of AI: Collaborative learning, incentives, and social welfare”. ICASSP may seem like a strange venue for such an expansive talk, but Jordan argues – again – that it seems that way only if you are wedded to an overly narrow definition.
“You can construe signal processing very narrowly, and then it’s how do you do compression, how do you reproduce images with high fidelity, and so on,” he says. “But those are the technical challenges of the past. In new domains, the notion of what constitutes a signal is much wider. Signals often come from humans, and they often have semantic content. Furthermore, when people interact with an economic relationship in mind, they signal to each other in various ways: What am I willing to pay for this?
“So part of the story here is to say: hello, signal-processing people, it’s not just about data and algorithms and statistics. It’s about a broader conception of signals. Signaling is not just about the processing and streaming of bits, but about what those bits mean and what market forces they can set in motion.”
Statistical contract theory
One of the tools that Jordan and his Berkeley research group use to make markets more intelligent is what they call statistical contract theory. Classic contract theory examines markets with information asymmetries: for example, a seller does not know how much potential buyers value a certain good, but the buyers themselves do.
The goal is to design a menu of contracts that balances out the asymmetries. An example is the tiered seating classes on aircraft: some customers will contract to pay higher prices for more space and better food; some customers will contract to forgo those benefits in exchange for lower prices. The seller does not need to know in advance which customer belongs to which population; the populations self-select.
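A minimal Python sketch of this self-selection mechanism (the menu, prices, and per-unit valuations below are invented for illustration; the utility model, valuation times quality minus price, is the standard screening setup, not a detail from the article):

```python
from dataclasses import dataclass

@dataclass
class Contract:
    name: str
    price: float    # what the customer pays
    quality: float  # e.g., space and service level

# Hypothetical two-item menu, analogous to tiered airline classes.
MENU = [
    Contract("economy", price=100.0, quality=1.0),
    Contract("business", price=300.0, quality=2.0),
]

def choose(valuation, menu):
    """A buyer with a private per-unit valuation of quality picks the
    contract maximizing utility = valuation * quality - price."""
    return max(menu, key=lambda c: valuation * c.quality - c.price)

# The seller never observes the valuations; the buyers self-select.
print(choose(150.0, MENU).name)  # low-valuation buyer picks economy
print(choose(250.0, MENU).name)  # high-valuation buyer picks business
```

The seller only has to design the menu so that each buyer type prefers the contract intended for it; the buyers’ choices then reveal which population they belong to.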
In statistical contract theory, Jordan explains, the contracts have statistical procedures embedded in them. The example he likes to use is the drug approval process.
“The regulatory agency’s job is to decide which drugs go on the market,” says Jordan. “And that is partly a statistical problem: you have a drug candidate, and it may or may not be effective at improving human health.
“The problem is that there are more players in this game. The drug candidates are not coming from the agency itself. There are third parties, the pharmaceutical companies, that generate drug candidates, which they would like to have tested.
“The agency has no idea whether a candidate is good or bad before it runs its clinical trials. But the pharmaceutical company knows a little more. It knows how it developed the candidate, and maybe it has some internal tests. So how do you get the pharmaceutical company to reveal whether it believes the candidate is good or not?
“The solution is something we call statistical contract theory, and hopefully it will emerge as a new field. The mathematical ingredients are again menus of options – including licensing fees, durations of licenses, sizes of the trials, and so on – and each drug company chooses from the menu for each possible drug.
“In the process of choosing from that menu, the drug company reveals what it knows about its candidate.”
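The drug-approval example can be sketched in the same style. Everything numeric below is invented for illustration: two hypothetical contracts differing in trial size (and hence statistical power and false-positive rate) and in fee, with a company choosing according to its private belief p that its candidate is effective:

```python
def expected_profit(p, power, false_pos, fee, market_value=100.0):
    """Company's expected profit under a contract: probability of
    approval (true positive if effective, false positive if not)
    times market value, minus the contract's fee."""
    approval_prob = p * power + (1 - p) * false_pos
    return approval_prob * market_value - fee

# Hypothetical menu: a large, expensive, high-power trial versus a
# small, cheap, low-power one.
CONTRACTS = {
    "large_trial": dict(power=0.90, false_pos=0.01, fee=30.0),
    "small_trial": dict(power=0.50, false_pos=0.05, fee=5.0),
}

def best_contract(p):
    """The contract a profit-maximizing company with belief p selects."""
    return max(CONTRACTS, key=lambda name: expected_profit(p, **CONTRACTS[name]))

print(best_contract(0.9))  # confident company opts into the large trial
print(best_contract(0.3))  # doubtful company opts for the small trial
```

The agency never observes p, but the company’s choice of contract reveals information about it, which is the self-selection effect Jordan describes.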
Prediction-powered inference
Another tool that Jordan’s group has developed is called prediction-powered inference.
“How do I use neural nets not just to make good predictions, but to make good confidence intervals?” says Jordan. “The problem is that even if the predictions are very accurate on average, they still make big errors in some cases, and those errors can conspire to give biased confidence intervals. We have a new technique called prediction-powered inference that solves this problem.
“Classical bias correction would be just that: I estimate the bias, and I correct the original estimate for that bias to get a less biased estimator. What we do is different. We do not estimate the bias but rather a confidence interval on all the possible biases, and we combine that with the original value to get a confidence interval on the true parameter.”
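For the simplest case, estimating a mean, the idea can be sketched as follows. This is a hedged illustration of the basic prediction-powered recipe – a prediction-based estimate plus a correction term measured on labeled data, with both sources of uncertainty entering the interval – not the full method from the group’s papers; the data and the biased model are simulated:

```python
import numpy as np

def ppi_mean_ci(y_labeled, pred_labeled, pred_unlabeled, z=1.96):
    """Approximate 95% confidence interval for a mean, combining model
    predictions on abundant unlabeled data with the model's measured
    errors (the correction term) on a small labeled set."""
    n, N = len(y_labeled), len(pred_unlabeled)
    errors = y_labeled - pred_labeled              # measured model errors
    point = pred_unlabeled.mean() + errors.mean()  # corrected estimate
    se = np.sqrt(pred_unlabeled.var(ddof=1) / N + errors.var(ddof=1) / n)
    return point - z * se, point + z * se

# Simulated illustration: the model over-predicts by a constant 0.5.
rng = np.random.default_rng(0)
y_labeled = rng.normal(0.0, 1.0, 200)        # small labeled sample
x_unlabeled = rng.normal(0.0, 1.0, 10_000)   # large unlabeled sample
model = lambda y: y + 0.5                    # systematically biased predictor
lo, hi = ppi_mean_ci(y_labeled, model(y_labeled), model(x_unlabeled))
print(f"interval for the true mean (0 here): [{lo:.3f}, {hi:.3f}]")
```

Because the measured errors cancel the constant bias in this toy example, the resulting interval centers near the true mean of 0 rather than near the predictions’ mean of 0.5, while staying narrow thanks to the large unlabeled sample.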