News Microsoft CEO Admits That AI Is Generating Basically No Value

t3chg33k

Innovator
"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

Needless to say, we haven't seen anything like that yet.

 
Basically "A.I." abhi tak A.I. hee nahi bana hai.
( For those who might not get it,
Present so called "a.i." is not 'chappie' yet )
 
Nevertheless, this article by BBC says:
"The Marcellus report also points out that white-collar urban jobs are becoming harder to come by as artificial intelligence automates clerical, secretarial and other routine work. "The number of supervisors employed in manufacturing units [as a percentage of all employed] in India has gone down significantly," it adds."
So, there is some value after all?
 
Current AI is essentially a prediction engine: it is very good at pattern matching and predicting the next most likely letter, word, or pixel colour from its training data. Because it became so good at that, other properties such as reasoning emerged on their own; humans didn't code for them.

And we also don't really know how it works internally: humans can't predict the output of a neural network from a given input once it has a huge number of parameters and layers. That's how deep the layers go.
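To make the "prediction engine" point concrete, here is a minimal sketch of the one step a language model repeats over and over: turn raw scores (logits) over a vocabulary into probabilities and pick the next token. The tiny vocabulary and the logit values below are made up purely for illustration; in a real model those logits come out of billions of learned weights.

```python
import numpy as np

# Toy vocabulary and made-up logits, standing in for a real model's output scores.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([1.2, 3.5, 0.3, 0.9, 2.1])   # higher score = "more likely next token"

# Softmax turns the raw scores into a probability distribution.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "Generation" is just sampling (or taking the argmax) from that distribution, token after token.
next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

Everything else, including the reasoning-like behaviour, emerges from doing that one step extremely well.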
 
Exactly!!!
---------------------------------------
 

Current AI is already capable of this sort of thing, and people will soon be taking advantage of it, but it is equally dangerous in the wrong hands. We are poking at something we don't really understand.

What do people mean when they say that even though we created deep learning, we still don't know how it works internally?
When people say that, they mean that while we designed deep learning models and understand their architecture, we often don't fully understand how they arrive at their decisions internally. This is because deep learning models, especially large neural networks, function like black boxes with millions or even billions of parameters (weights and biases).

Here’s why this happens:

Complexity & Scale – Modern neural networks, like GPT or image recognition models, have an astronomical number of connections and interactions between layers. While we can analyze individual weights, the emergent behavior of the entire network is not something we explicitly programmed, it develops through training.

Non-Intuitive Feature Representation – Neural networks don’t operate like human-designed algorithms with clear logical steps. Instead, they learn abstract patterns in data that aren’t always interpretable in human terms. For example, an AI trained for image recognition might classify a dog, but we don’t know which pixel combinations it finds most significant.

Emergent Behaviors – Some AI models develop capabilities that weren’t explicitly intended during training. For instance, large language models trained on predicting text might unexpectedly develop reasoning-like abilities, even though no one specifically designed them for that.

Lack of Transparent Decision Paths – Unlike traditional algorithms where you can follow a clear if-else decision tree, neural networks work through matrix multiplications and nonlinear activations, making it hard to trace exactly how an input transforms into an output (a minimal sketch follows this list).

Adversarial Vulnerability – AI models can sometimes be fooled by seemingly minor, imperceptible changes in input data. Since we don’t fully grasp how they encode information, they remain susceptible to such attacks.
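On the "Lack of Transparent Decision Paths" point, a forward pass really is just matrix multiplications and nonlinear activations. A minimal sketch with random, untrained weights (sizes made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with random weights; a real model has billions, learned from data.
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def forward(x):
    h = np.maximum(0, W1 @ x + b1)   # matrix multiply + nonlinear activation (ReLU)
    return W2 @ h + b2               # another matrix multiply -> output scores

x = rng.normal(size=4)               # some input
print(forward(x))                    # the "decision": three numbers, no readable rule behind them
```

Even in this toy there is no if-else path to follow; scaled up to billions of weights, tracing why a particular input produced a particular output becomes practically impossible.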
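And on the "Adversarial Vulnerability" point, the classic fast gradient sign method is only a few lines once you have gradients. The sketch below uses a toy, untrained classifier with PyTorch autograd (the library choice and sizes are just assumptions for illustration): nudge every input value slightly in the direction that increases the loss, and the prediction can flip even though the input barely changed.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(8, 2)          # toy, untrained "classifier"

x = torch.randn(1, 8, requires_grad=True)
label = torch.tensor([0])

loss = F.cross_entropy(model(x), label)
loss.backward()                        # gradient of the loss with respect to the input itself

epsilon = 0.1                          # small, "imperceptible" step size
x_adv = x + epsilon * x.grad.sign()    # FGSM: step each input value against the model

print(model(x).argmax().item(), model(x_adv).argmax().item())   # the predicted class may flip
```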

What We’re Doing About It
Explainable AI (XAI): Researchers are developing methods to interpret AI decisions, like feature visualization, saliency maps, and concept activation (see the gradient-saliency sketch after this list).

Attention Mechanisms: Some models, like transformers, provide attention scores that show which input parts were most important (a small sketch of this also follows the list).

Layerwise Analysis: Researchers analyze individual layers to understand what representations they learn.
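To make the saliency-map idea from the first bullet concrete: the simplest version takes the gradient of the predicted score with respect to each input value and treats its magnitude as "how much this input mattered". A minimal sketch, again assuming PyTorch and a toy untrained model (real XAI tooling is more elaborate):

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3)
)

x = torch.randn(1, 10, requires_grad=True)   # 10 input features / "pixels"
score = model(x)[0].max()                    # score of the predicted class

score.backward()                             # gradient of that score w.r.t. the input
saliency = x.grad.abs().squeeze()            # larger value = input the score is more sensitive to

print(saliency)
```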
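And the "attention scores" mentioned above are, at their core, just a softmax over query-key dot products, giving one weight per pair of input positions. A plain-NumPy sketch with random vectors (in a real transformer the queries, keys and values come from learned projections):

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = ["The", "dog", "chased", "the", "ball"]
d = 8                                    # made-up vector size per token

Q = rng.normal(size=(len(tokens), d))    # queries (learned projections in a real model)
K = rng.normal(size=(len(tokens), d))    # keys
V = rng.normal(size=(len(tokens), d))    # values

scores = Q @ K.T / np.sqrt(d)            # scaled dot-product similarity between positions
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1

output = weights @ V                     # each position becomes a weighted mix of the values
print(weights.round(2)[2])               # e.g. how strongly "chased" attends to every token
```

Those per-row weights are what gets visualised when people show "which words the model attended to".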

In short, we built deep learning models, trained them, and can analyze their performance, but we don’t always fully understand the intricate logic behind their decision-making. It’s like how we understand the human brain’s general principles but still don’t fully grasp consciousness or thought formation.



This is the scary part: people don't really know what they are playing with.

Emergent Behaviors – Some AI models develop capabilities that weren’t explicitly intended during training. For instance, large language models trained on predicting text might unexpectedly develop reasoning-like abilities, even though no one specifically designed them for that.



By tickling deep neural networks with more and more parameters, AI could ultimately give humans something we are not supposed to have by natural law. It can self-generate properties and abilities which, once humans detect them, could be used to do wrong things in the world, and disasters might happen.
 