News WWDC 2025: Did Apple Just Hit Snooze?

- Ultra-summarizing everything. You will lose the ability to think deeply about anything, and your thoughts will be devoid of any nuance
Critical thinking has really taken a back seat. I know of, and have seen, youngsters planning random-ass life-changing decisions with AI, without any manual research or analytical thinking of their own.
 
Bad:
- Trying to offload mental work completely to it. This is a recipe for making yourself stupid. Remember that your brain is kind of like a muscle: if you don't use it, you lose it
- Ultra-summarizing everything. You will lose the ability to think deeply about anything, and your thoughts will be devoid of any nuance
Thanks for highlighting these. They are very important points that cannot be overstressed. Use these tools to your advantage instead of training them to replace you.
 
I work with a company that helps train LLMs. We use these chatbots on a daily basis, and I already see changes in the thinking patterns of some of my trainees who joined as recently as four months ago. Their first instinct when they come across anything even slightly outside their comfort zone is to ask AI to do it for them. Can't help but wonder how it's gonna be a few years down the line.

Don't get me wrong. The tech really intrigues me. It's cool. It's exciting. I wouldn't be in this field otherwise. But there is no question that the way people are using AI leads to brain rot in the long run.
 
Yes, it makes me selfishly happy that I have already spent the last 40 years of my life training my brain to operate without AI. Those who won't give their brains the time and effort to train without AI will just end up training the AI.
 
Nowadays Gen Z uses AI so much that they hardly use their brains at all. Everything is done by their phones, whether it's translating things or ChatGPTing the homework.
In our day, back in the 90s, it was all hard work we did ourselves, and if anything went wrong we had to write it out ten more times as punishment. Nowadays kids just take printouts.
 
  • Maybe they should first fix the irritating animation delay on the iPhone when you're making a UPI payment in a rush, or the Face ID animations.
  • Or fix the irritating vertical bounce/scroll effect in Safari.
  • Or fix the horrendous wait for the tap-hold popup submenus just to select some text. You have to tap multiple times to get the popup.
  • Or an actually useful text autocorrection.
  • Or an actual caps lock on the keyboard.
  • Or number keys and symbols on the IRRITATING keyboard.
  • Or even make the currently useless Apple Books & Apple Pay available in India.

  • Then I will buy a dull, colorless iPhone Pro and go find friends who actually even use iMessage and FaceTime.

  • God, I miss "Developer Mode".
 
My fascination with LLMs started to fade when they dumbed it down for the o1 release, and it died out completely once I understood that they are just predictive text generators relying on clever math. That kind of burst the bubble/illusion for me.

Serendipitously, Google results started being usable again, so hey.
 
The issue is that the same predictor ends up producing inaccuracies, because everything it outputs is driven by probability. You simply have to fact-check everything coming out of an LLM if you want to use it for anything other than social media.

This is okay for people working at the lower rungs of an organisation, but it comes with huge accountability at higher levels. It can speed up the code generation that used to be done by freshers, but it significantly increases the work for the experienced peer reviewing that code.

It helps people who are completely unfamiliar with a topic to understand something, but it becomes a problem when people take it to be the expert, which it is not, because it is making things up based on content from actual experts.
 
AI excels at recognizing patterns. With just a few inputs, it can often generate meaningful outputs. It interprets information based on probability, finding structure and relevance in places where humans might only see noise or randomness. When fed large amounts of images or video, it begins to detect patterns that we can't see or make sense of.

Of course, the effectiveness of AI depends heavily on the quality and accuracy of the data it was trained on. When asked to recognize something outside its original training data, it makes predictions based on what it has learned, guided by probability. Sometimes it's right, sometimes wrong, and sometimes partially correct. But in any case, it's doing something, which is often better than nothing at all.
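As a toy illustration of that "guided by probability" behaviour (my own sketch in Python, with made-up data, not taken from any real system): even a tiny softmax classifier trained on two clean clusters will still hand back a confident probability for an input nothing like its training data.

```python
# Toy sketch (hypothetical data, no real model): a tiny softmax classifier
# always answers with a probability, even for inputs far outside its training data.
import numpy as np

rng = np.random.default_rng(0)

# "Training data": class 0 clustered near (0, 0), class 1 clustered near (5, 5).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

W, b = np.zeros((2, 2)), np.zeros(2)  # 2 features -> 2 classes

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Plain gradient descent on cross-entropy loss.
for _ in range(500):
    grad = (softmax(X @ W + b) - np.eye(2)[y]) / len(X)
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

# A point unlike anything it was trained on -- it still gets a near-certain answer,
# which is confident but not necessarily sensible.
print(softmax(np.array([[-40.0, 80.0]]) @ W + b))
```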

We don't completely understand what is happening in the deep layers; top people are working on it. The day we are able to predict a model's output character by character before giving it the prompt is the day we will have truly demystified it.
 
It will always go the other way. As you move towards AGI, you are looking more and more to imitate a human. As you do that, you are looking at more abstract reasoning, which means AI will keep becoming more of a black box as it advances.
 
Gives me the chills: humans turned rocks into thinking machines so complex that our own brains cannot understand how they work. Imagine the human mind itself; we will never truly understand how or why it works.
 
Guess we will have to start taking the Voight-Kampff test soon instead of a CAPTCHA.
 
My fascination with LLMs started to fade when they dumbed it down for the o1 release, and it died out completely once I understood that they are just predictive text generators relying on clever math. That kind of burst the bubble/illusion for me.

Serendipitously, Google results started being usable again, so hey.
At its base level, all AI is just maths. It's honestly incredible how the forerunners of AI even thought of this approach, and you'll fall even more in love with it if you do even a surface-level dive into the tech behind it. I may make fun of AI/LLMs, but the research behind it is, for me, awe-inspiring.
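To put a concrete (and heavily simplified) picture behind that "it's just maths" point, here is a toy sketch of my own with made-up numbers, not how any real model is wired: next-token prediction at its core is matrix multiplication followed by a softmax over the vocabulary.

```python
# Toy sketch with made-up numbers (untrained weights, not a real architecture):
# "predict the next token" boils down to linear algebra plus a softmax.
import numpy as np

rng = np.random.default_rng(42)

vocab = ["the", "cat", "sat", "on", "mat"]
d_model = 8                                           # size of the hidden vector

embeddings = rng.normal(size=(len(vocab), d_model))   # one vector per token
W_hidden = rng.normal(size=(d_model, d_model))        # stand-in for all the stacked layers

def next_token_probs(context_ids):
    # "Read" the context: pool the token vectors and pass them through one layer.
    h = np.tanh(embeddings[context_ids].mean(axis=0) @ W_hidden)
    logits = h @ embeddings.T                          # score every word in the vocabulary
    e = np.exp(logits - logits.max())
    return e / e.sum()                                 # softmax: probability of each next token

# Untrained, so the numbers are meaningless; training is the part that makes them useful.
for word, p in zip(vocab, next_token_probs([vocab.index("the"), vocab.index("cat")])):
    print(f"{word}: {p:.2f}")
```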
 
What if YOU are a next-word predictor? What if that's how speech and thoughts are constructed in our minds?

I'm just kidding, but it's interesting to think about.


We don't completely understand what is happening in the deep layers; top people are working on it. The day we are able to predict a model's output character by character before giving it the prompt is the day we will have truly demystified it.

I don't think it's a black box in the sense you're portraying. We can reasonably predict a model's output if we keep the temperature (an inference parameter, not room temperature) extremely low. Temperature directly affects how random and varied the output is from run to run, so with a low temperature, multiple runs with the same prompt lead to very similar outputs. In essence, you can predict the output of the next run with the same prompt from the current run -- which doesn't sound like much of an achievement. But if you were able to predict the output character by character, you wouldn't need the model.
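A minimal sketch of that temperature point, using toy numbers rather than any particular model's actual API: the logits are divided by the temperature before the softmax, so a low temperature collapses the distribution onto the top token and repeated runs look nearly identical, while a high temperature spreads the probability out and runs start to differ.

```python
# Toy sketch of temperature (made-up logits, not a real model):
# divide the scores by T before the softmax, then sample.
import numpy as np

rng = np.random.default_rng(7)

tokens = ["paris", "london", "rome", "cairo"]
logits = np.array([2.0, 1.5, 0.3, -1.0])          # hypothetical scores for 4 candidate tokens

def sample(temperature, n=10):
    scaled = logits / temperature
    p = np.exp(scaled - scaled.max())
    p /= p.sum()
    return [tokens[i] for i in rng.choice(len(tokens), size=n, p=p)]

print("T=0.1:", sample(0.1))   # collapses onto the top token: run-to-run output is predictable
print("T=1.5:", sample(1.5))   # much flatter distribution: output varies between runs
```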

The black box nature of it IMO comes from a lack of understanding of how the neural network internalizes various abstract concepts and why we see various emergent properties as a result of the training process. One example of this emergence is the ability to translate across languages even if an LLM was not specifically trained on translation input-output data.