Maybe it's just me, but I am getting tired of AI-generated text. Not talking about this thread or the series, but in general across forums. You get a mountain of text to read through, with potentially faulty info. Sometimes it's not even disclosed that it was AI-generated, and I don't feel like putting effort into reading or replying to something that was generated with low effort. Happened once in another forum. Some oversmart dummy was just replying without even understanding. Waste of time. Again, not talking about this one, I just scanned it and did not bother to read much.
Anyway - I've heard a few times that Galax hasn't been honoring warranties in the last few years. I wouldn't touch them.
The articles on TE are summarized from the discussions that have happened on TE itself, so they may not reflect content created on YouTube or Reddit.
I'd encourage more people to share their experiences here on TE so that the guides get updated.
Update Notes:
Service quality data based on 2023-2025 forum experiences
City tier classifications accurate as of January 2026
Brand service networks verified through official distributor websites
RMA timelines represent median of reported experiences, not guarantees
Semi-annual updates planned to track service quality changes
It’s a burden to read if you’re not interested in reading it. There’s something about sentence structure and tone that feels not just artificial, but insincere.
Summaries are fine, I use AI for that nearly every hour. Just pop in a YouTube transcript, ask for a summary, and I have a 27-minute video condensed down to 2 minutes of reading.
But like on reddit? I glance over the paragraph structure, the bolding, italicising, indenting — and I skip to the end. Almost always they say they used Claude or ChatGPT to write it and I suddenly don’t care at all about reading it.
The way people are using it is by feeding AI a low effort post, asking it to make it sound good, and then agreeing with whatever the AI spits out.
They’re not exerting control, they’re not editing, they have no interest in taking ownership of the process. It’s like hiring a maid to speak on your behalf while your brain daydreams about the colour beige.
Take this for example: I asked ChatGPT to come up with a gen-alpha-esque roast:
It took almost 15 minutes of tweaking and refining and I still didn’t like the word flow, but I went ahead and posted it before context was lost in that thread.
It generated text in what it thought was gen alpha flow, but it just reads as a confused millennial who fell into the vocabulary of an ineloquent gen Z person.
Still funny, but tiresome to read more than a couple of sentences.
The problem with AI generated text is that most people aren’t invested in what it’s saying.
It can be used properly, like here:
ChatGPT did a fantastic job in understanding what the FM was trying to say and it’s because the source text was earnest in what they were trying to convey.
Compare with this excerpt:
What do you mean “Why This Hurts”!?
You were cheated, of course it hurts! We don’t expect you to be happy about it, but you don’t need to verbalize that it hurts, it’s understood.
The whole post reads like an 8-year-old’s sob story about having his chocolate bar stolen.
The quoted OP’s suffering through the ordeal of being cheated is valid, but AI made a mockery of his pain.
So yeah, definitely agree. AI text is only as good as what it’s based on; we’re right back at the garbage-in, garbage-out adage that the computing forefathers waxed poetic about.
Yeah, there will be good uses, but now I need to run some mental checks against possible AI spam text.
I don't always know that the other guy is using AI to post. It can be just convincing enough to seem OK, until at some point you see that the context was lost in the reply. I might take the time to read and write a response, and then figure out that the other guy is just wasting time / trolling. Good tool for trolling though, hehe.
And sometimes you just get a huge amount of text. For now, I just skip it, or at best scan it quickly and only read what looks interesting.
For some reason these AI bots use a lot of – too. That's another way to pick up on some of the AI text. Maybe one day people will start spamming so much that the input for these AI tools will get degraded.
AI is useful when the poster already understands what they want. When someone just feeds it a half-baked idea, doesn’t fact-check, doesn’t edit, and then posts a wall of text they clearly don’t own, it becomes noise. The problem there is effort and accountability, not AI.
Bad posts existed long before AI. Now they’re just longer, more confident-sounding, and easier to produce. Garbage in, garbage out still applies as @rsaeon said. If the person isn’t invested in what they’re saying, no amount of polish makes it worth reading.
That said, a non-AI supremacy stance isn’t a higher standard either, IMO. Refusing a tool doesn’t make the thinking better. It’s like refusing to use a calculator or GPS and calling it purity. Being good at cooking doesn’t mean you have to hate microwave ovens. If you don’t know what you’re doing, the food will still be bad. If you do, it just gets done faster. And the microwave can do things traditional cooking methods simply can’t. Accuracy, ownership, and effort matter, not whether a tool was used.
I have seen a few of these channels now with clickbait titles around geopolitics.
A sort-of-still image of an old Western person talking. It did not seem fake; this AI stuff looks pretty convincing, but I guess experienced people might know how to look for tells. Accents I don't know, I can't tell.
Typically they have only a few videos in the channel, at least the ones I saw. So maybe these channels are getting banned.
YouTube recommendations for videos from new and unfamiliar channels are suspect for me now. I don't bother clicking them.
The best way to explain to the layman why not to use AI for everything is to make them watch the Friends episode about Joey and the thesaurus. Once you use AI for everything, the original meaning is lost if you don't understand the final output.