When AI Met Its Human Reckoning
It was supposed to be a coronation of machine supremacy. What unfolded was a reckoning.
In the sunlit academic halls of UP Diliman, Asia’s brightest students—engineers, economists, AI researchers—converged to see the future of trading laid bare by machines.
They expected Plazo to reaffirm their belief that AI would rule the markets.
Instead, they got silence, contradiction, and truth.
---
### When a Maverick Started with a Paradox
Some call him the architect of near-perfect trading machines.
As he stepped onto the podium, the room quieted.
“AI can beat the market. But only if you teach it when not to try.”
A chill passed through the room.
It wasn’t a thesis. It was a riddle.
---
### A Lecture or a Lament?
Plazo didn’t pitch software.
He projected mistakes—algorithms buying at peaks, shorting at troughs, mistaking irony for euphoria.
“Most AI is trained on yesterday. Investing happens tomorrow.”
Then, with a pause that felt like a punch, he asked:
“Can your AI feel the fear of 2008? Not the charts. The *emotion*.”
No one answered. They weren’t supposed to.
---
### But What About Conviction?
The audience didn’t sit quietly. These were doctoral minds.
A PhD student from Kyoto noted how large language models now detect emotion in text.
Plazo nodded. “Detection is not understanding.”
A data scientist from HKUST proposed that probabilistic models could one day simulate conviction.
Plazo’s reply was metaphorical:
“You can simulate weather. But conviction? That’s lightning. You can’t forecast where it’ll strike. Only feel when it does.”
---
### The Real Problem Isn’t AI. It’s Us.
He didn’t bash AI. He bashed our blind obedience to it.
“This isn’t innovation. It’s surrender.”
Yet his own firm uses AI—but wisely.
His company’s systems scan sentiment, order flow, and liquidity.
“But every output is double-checked by human eyes.”
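The workflow he describes is a classic human-in-the-loop design: the model proposes, a person disposes. A minimal sketch of that pattern might look like this (all names and the toy trading rule are illustrative assumptions, not Plazo's actual system):

```python
# Human-in-the-loop sketch: the model proposes trades from sentiment,
# order-flow, and liquidity signals, but nothing executes without an
# explicit human sign-off. All logic here is a toy stand-in.
from dataclasses import dataclass

@dataclass
class Signal:
    sentiment: float    # -1 (fear) .. +1 (euphoria)
    order_flow: float   # net buy pressure
    liquidity: float    # 0 (thin) .. 1 (deep)

def model_proposal(s: Signal) -> str:
    """Toy rule standing in for the model: act only in liquid, unambiguous regimes."""
    if s.liquidity < 0.3:
        return "HOLD"   # thin market: the model is taught when *not* to try
    if s.sentiment > 0.5 and s.order_flow > 0:
        return "BUY"
    if s.sentiment < -0.5 and s.order_flow < 0:
        return "SELL"
    return "HOLD"

def execute(s: Signal, human_approves) -> str:
    """Every model output is double-checked by human eyes before anything trades."""
    proposal = model_proposal(s)
    if proposal != "HOLD" and not human_approves(proposal, s):
        return "VETOED"  # the most human act: saying no to the model
    return proposal

# Usage: a cautious reviewer who vetoes buying into extreme euphoria.
veto_euphoria = lambda action, s: not (action == "BUY" and s.sentiment > 0.8)
print(execute(Signal(sentiment=0.9, order_flow=1.0, liquidity=0.9), veto_euphoria))  # VETOED
print(execute(Signal(sentiment=0.6, order_flow=1.0, liquidity=0.9), veto_euphoria))  # BUY
```

The point of the structure is that the veto lives outside the model: no retraining or threshold tweak can remove the human from the execution path.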
Then came the killer line:
“‘The model told me to do it.’ That’s what we’ll hear after every disaster in the next decade.”
---
### Asia’s Enthusiasm, Interrupted
Nowhere does AI have more believers than Asia.
Dr. Anton Leung, a Singapore-based ethicist, whispered after the talk:
“What Plazo gave us was oxygen in a burning house.”
Later, in a faculty roundtable, Plazo pressed the point:
“Don’t just teach students to *code* AI. Teach them to *think* with it.”
---
### The Closing Words That Didn’t Feel Like Tech
The final minutes felt more like poetry than programming.
“The market isn’t math,” he said. “It’s a novel. And if your AI can’t read character, it’ll miss the plot.”
The room froze.
Some compared it to hearing Taleb for the first time.
The takeaway lingered: sometimes, in the age of machines, the most human thing is to *say no to the model*.