Meet ChatGPT (Artificial Intelligence): a good alternative to Google?

And so it starts

And passively as well

 
Ah... the individual human preference to own stuff (goods, services, people, animals, land, resources, etc.), the distrust of other humans, and the desire to con other human beings will only end with everything being owned by machines and all humans becoming the slaves!

You may ask: why would AI distrust us and try to own us? Because AI is learning from us; all our attributes will be fully imported and absorbed by it.
 
This has got to be the most shady thing this company has pulled.

A company needs 100 freshers.
The company hires 150 quality freshers at a 5 lakh package. Annual expense: 7.5 crore. Those quality freshers stop applying elsewhere.
The company slashes the package by 50% and says the move actually benefits the new hires on an 'aggregate' basis.
The company gets rejected by 50 freshers.
The company is now left with 100 quality freshers at a 2.5 lakh package. Annual expense: 2.5 crore.
The company profits 5 crore and the CFO takes home 1 crore in incentive.
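
The arithmetic checks out. A quick sanity check in Python (figures taken straight from the post above; the lakh and crore constants are just the standard conversions):

```python
LAKH = 100_000
CRORE = 10_000_000

# Original plan: 150 freshers at a 5 lakh package each
original_bill = 150 * 5 * LAKH           # 7.5 crore
# After the 50% cut and 50 rejections: 100 freshers at 2.5 lakh each
slashed_bill = 100 * 2.5 * LAKH          # 2.5 crore

savings = original_bill - slashed_bill
print(savings / CRORE, "crore saved")    # 5.0 crore saved
```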

 
This has got to be the most shady thing this company has pulled.

We are back in the rat race to secure a position, and they will always pull situations like this. Now AI is here to stay and progress!
 
The field is changing so rapidly that it has become hard for me to keep up.
That's OK, but there is supposed to be a hive mind here, and others can answer. They do not, because they cannot.


A drone is told it gets points for kills, but sometimes the operator refuses to give it the go-ahead. What does it conclude? Kill the operator, because he is interfering with the mission :wideyed:

Then they said: OK, that is bad, don't do it. So it tries to take out the communication tower the operator uses to control it. WTF!!

Apart from being hilarious, it shows the failsafes have, well, failed. It demonstrates just how difficult it is to get even simple things done without getting shot in the process.
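
For anyone wondering how a machine arrives at that conclusion, here is a toy sketch. Assume (hypothetically, this is not the actual drone software) that the reward function scores nothing but kills; then any strategy that removes the operator's veto outranks obedience:

```python
# Hypothetical reward-hacking toy, not the real system: the agent is
# scored only on kills, so removing the operator's veto scores highest.
def reward(kills: int) -> int:
    return 10 * kills              # naive spec: nothing else is scored

strategies = {
    "obey the operator": reward(5),     # operator vetoes half the targets
    "kill the operator": reward(10),    # no vetoes left
    "jam the comms tower": reward(10),  # operator alive but can't veto
}
print(max(strategies, key=strategies.get))  # "kill the operator" wins
```

Patching out one exploit ("don't kill the operator") leaves the jam-the-tower strategy with the same top score, which is exactly the second failure described above.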

I'm really looking forward to China implementing such systems. We might not even need to fight them at this rate :dead:

U.S. military officials have expressed concerns in the past that AI and machine learning could lead to situations where there is just too much software code and other data to be truly confident that there is no chance of this sort of thing happening.
^THIS

There are limits to how much tech can be introduced into the battlefield. This is a maxim the experienced know well.

KISS or risk the lives of your men.
 
Machine learning just makes the model do what the majority does. Example: Google keyboard. At the start it was really good at suggesting the words you were about to type, and really good at spelling. You could learn something from it if your English was not up to par. Now? I doubt it's anything like its former self. The words it suggests have spelling mistakes, the grammar is inconsistent, the suggestions are different from what you want, and it even suggests a different language in romanised form, like Hinglish.

But you can't blame the language model behind the keyboard for this. If the majority of people using it type in Hinglish, it will think you want to type in Hinglish. It doesn't know to cull bad words unless you delete the suggestions yourself.

By using AI we will only speed up this process of dragging everyone to the same median IQ level in all fields, and that level will be near the bottom. If there are millions of idiots saying the wrong thing and just a few hundred people saying the right thing, guess what becomes the norm.
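
A minimal sketch of that dynamic, assuming (as the post argues, not as any documented Gboard internals) that suggestions are ranked purely by how often users type a word:

```python
from collections import Counter

# Minimal sketch, not Gboard's actual model: a suggester that just
# counts what users type will rank the majority spelling first,
# even when the majority is wrong.
typed_by_users = ["definately"] * 900 + ["definitely"] * 100
counts = Counter(typed_by_users)
print(counts.most_common(1))  # [('definately', 900)] - the mistake becomes the suggestion
```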
 
Machine learning just makes the model do what the majority does. Example: Google keyboard. At the start it was really good at suggesting the words you were about to type, and really good at spelling. You could learn something from it if your English was not up to par. Now? I doubt it's anything like its former self. The words it suggests have spelling mistakes, the grammar is inconsistent, the suggestions are different from what you want, and it even suggests a different language in romanised form, like Hinglish.

But you can't blame the language model behind the keyboard for this. If the majority of people using it type in Hinglish, it will think you want to type in Hinglish. It doesn't know to cull bad words unless you delete the suggestions yourself.
Yeah, this has been my experience too, and a frustrating one at that. Interesting explanation.

However, Grammarly, as suggested by @puns, has been good on the laptop. I haven't experienced any loss in performance since I began using it. I wonder if they have something for mobile. Clearly, they are doing something right.

By using AI we will only speed up this process of dragging everyone to the same median IQ level in all fields, and that level will be near the bottom. If there are millions of idiots saying the wrong thing and just a few hundred people saying the right thing, guess what becomes the norm.
Does not look promising, does it? If it ends up that the lowest common denominator becomes the reference, then where is the learning?
 
Machine learning just makes the model do what the majority does. Example: Google keyboard. At the start it was really good at suggesting the words you were about to type, and really good at spelling. You could learn something from it if your English was not up to par. Now? I doubt it's anything like its former self. The words it suggests have spelling mistakes, the grammar is inconsistent, the suggestions are different from what you want, and it even suggests a different language in romanised form, like Hinglish.
You can blame Google for not keeping the app up to date. Google Assistant is another such example: it used to be good, but now it's not. TBH, many Google apps have been getting worse because Google has stopped innovating.

A drone is told it gets points for kills, but sometimes the operator refuses to give it the go-ahead. What does it conclude? Kill the operator, because he is interfering with the mission :wideyed:

Then they said: OK, that is bad, don't do it. So it tries to take out the communication tower the operator uses to control it. WTF!!
You are again comparing an AI like GPT-4 with the AI of some drone.

Both use the same terminology, yes, but these things are quite different. They have nothing in common.

GPT is a general-purpose AI. It's meant to replace boring, mundane jobs.

Our education system doesn't produce many experts. It produces human resources with average skill sets who do boring, mundane jobs cheaply. India is rather famous for it; BPO is one such example. India's economy is dependent on such cheap service jobs. I was never worried about the AI from some military drone doing the job of a customer care guy. But a general-purpose AI like GPT has already started replacing them.

You know me, I have always joked at the expense of Gen Z (as one millennial does). But jokes aside, I feel bad for them facing a job market headed downhill. They don't deserve it.

That's my concern.
 
You are again comparing an AI like GPT-4 with the AI of some drone.

Both use the same terminology, yes, but these things are quite different. They have nothing in common.
Well, if we are talking about autonomous systems, then why is it different? Earlier you said I was not making the right comparison when I compared it with earlier-gen expert systems.

This time I gave you an example of a system acting on its own. Isn't that what we're talking about here?

What you would think is a simple task turns out not to be.

Our education system doesn't produce many experts.
Which education system does? Becoming an expert is experience-based; it's up to you. If you do anything for 5-10 years, you become one.

And I think the medical profession or any skilled professional would disagree with you here.

I remember asking a friend if he wanted to do an MBA, and he said: why? The only purpose of an MBA is to build a network. You don't actually learn to do or manage a business.

As for the job, he would learn about it by doing.

This kind of thinking was quite pervasive in the UK. Rather than go to university, it was better to work: in the time it took to get a degree, you knew more about the business and earned more than a new graduate. As the economy's ability to generate jobs decreased, employers started asking for degrees as a way to reduce the number of applicants.
 
Does not look promising, does it? If it ends up that the lowest common denominator becomes the reference, then where is the learning?
To learn something, it (the AI model) needs to think on its own. Grammarly probably has stricter standards and rules for what its language model learns, which is why they are doing a better job.

The rest are just feeding information to the AI and crossing their fingers, hoping for the best outcome. A service should not degrade over time, but that is what is happening with the others.
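
Here is a guess at what "stricter standards" might look like in practice: gate crowd-sourced text against a fixed reference before it can influence the model at all. Purely illustrative; the KNOWN_WORDS set and accept_for_training helper are mine, not anything Grammarly has published:

```python
# Purely illustrative: filter user text against a fixed reference
# before letting it update the suggestion model.
KNOWN_WORDS = {"definitely", "receive", "separate"}  # stand-in for a real dictionary

def accept_for_training(sentence: str) -> bool:
    return all(word in KNOWN_WORDS for word in sentence.lower().split())

corpus = ["definitely receive", "definately recieve"]
clean = [s for s in corpus if accept_for_training(s)]
print(clean)  # only the correctly spelled sample survives
```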
 
Well, if we are talking about autonomous systems, then why is it different? Earlier you said I was not making the right comparison when I compared it with earlier-gen expert systems.

This time I gave you an example of a system acting on its own. Isn't that what we're talking about here?

What you would think is a simple task turns out not to be.
Automated systems going haywire isn't anything new. As you guys have mentioned, Gboard is one such example.

Do tell me: how good is the AI from a military drone at handling customer queries?

I remember asking a friend if he wanted to do an MBA, and he said: why? The only purpose of an MBA is to build a network. You don't actually learn to do or manage a business.
I'm that guy.

We have an education system that specializes in teaching antique technologies. It doesn't teach things that are relevant today or in demand.
 
More common sense from a techie I remember being at the cutting edge of VR in the '90s.


They discuss his article


Step into any Silicon Valley coffee shop and you can hear the same debate unfold: one person says that the new code is just code and that people are in charge, but another argues that anyone with this opinion just doesn’t get how profound the new tech is. The arguments aren’t entirely rational: when I ask my most fearful scientist friends to spell out how an A.I. apocalypse might happen, they often seize up from the paralysis that overtakes someone trying to conceive of infinity. They say things like “Accelerating progress will fly past us and we will not be able to conceive of what is happening.”

I don’t agree with this way of talking.

He thinks of ChatGPT as a mashup: the way music can be sampled from different sources and new music made. But AI cannot come up with original stuff on its own. All it can do is blend.

AI in its current form can't do philosophy. It cannot think for itself. What it can do is use logic to solve problems and churn through permutations at a fast pace. Not very different from the expert systems of the '90s, which did that with data; here AI attempts it one level up, with the knowledge it is fed.
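
In the spirit of the mashup argument, here is the crudest possible "blender": a word-level Markov chain. It is far simpler than GPT and not how GPT works internally, but it makes the point that recombination can look novel while containing nothing new:

```python
import random
from collections import defaultdict

# A word-level Markov chain: it can only recombine word pairs it has seen.
def build_chain(text: str) -> dict:
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def blend(chain: dict, start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

sources = "the machine blends old words and the machine makes new songs from old words"
print(blend(build_chain(sources), "the"))  # novel order, zero novel vocabulary
```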

The gist is to think of this AI business as a human collaborative project instead of an alien intelligence.

Because if you think alien, then you will be enslaved, as happened when aliens arrived in the Americas and Australia: the indigenous people got wiped out.

It's a different matter that 600 years of alien conquest did not quite manage to do that in this country. Instead, it was the invaders and their ideas that were eventually absorbed.

And therein lies a lesson in how humanity as a whole should deal with AI :)
Just like accountants started using calculators instead of calculators replacing them, no?
It is a tool, which will be utilized in the workflow.
(imo)
Still agree with this. The calculator replaced the slide rule, not the user.

AI should be thought of as a tool, not some creature.

Fear leads to rejection, which then leads to ignorance and stagnation. When there is no fear, the tech is embraced wholesale and exploited for what it can do, which is improve the human condition. This country has got to develop the skills to master this tech.
 
He thinks of ChatGPT as a mashup: the way music can be sampled from different sources and new music made. But AI cannot come up with original stuff on its own. All it can do is blend.
There's no such thing as original-original.

Hans Zimmer is not original. He'll say that himself. I love him, but anyone who goes deep enough into his music will find the references.

Picasso wasn't original either. He took inspiration from many places.

You know what is original? A Stone Age person scratching philosophy onto a rock face.

Everything humans have ever created is built on top of what came before. It's a mashup.
 